[binary artifact — tar archive of Zuul CI job output; contents per tar headers:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)
The gzip payload is compressed binary data and is not recoverable as text.]
gsHHnX"m81jTdݥŰN zIUBWT'ՕN緇/^z}~׿:|!eৃ÷kpw) $v+mJ?+m55Ms-iti״%^5dnele/g_‹޸ޛmqU?yifƼd'H6I*~ZEE.Q%U$T*D# , ̻q_uB6E,u-K_lLÔ JTfe<6`T`Wpؓ$C: H@F( =]OH?S: I8Da &c'&K,0T(4SSD%Uݒ3;-Lr< C׃%ZEL)JE1tt^- T0KnMʶO|#V6ي6"杞^]Ā][iTW4zhC+k:l2BxQNFB޹2JI( HQzw"B1+鮉4k4-Тbn dX#ƴH\kJ=.h9h ῼ3&(^ HGQu}]RP@A9\)f`g87_gP)t*>A1ry5? můq+=^^՗M}=7σ%%~VmhO՘v署C-@z@k ?v Z zWquzV}9b0 r_}‹Sb4Er(Np~#KJ #5I4oomx(K08~H9wFƆTkm ~6㵌4ա]ވ.W]6/M=2vf3_ GQ1}Z ͮ+>7EË\ea/q*]B2Ddܗ j[S^Vh^ ^/r4+OV5sfⲺEtӌBG֛%m9Fh?;+p|nχc2ץ.J ٻ6$W=,<# ӻAczh4򈔵E )٭")QSI 0,JFUE|Ef㿏Vt}jx`a?aGYਞn"kF 80 R1Rd?o5AXWj] \UV+LZ:mcPTIK[xI`E!W2pC|~/J@x]_߱E|~;c1`xu>E5:[@+g-}f_o'Sjų9u$r>ԺiBR!ͺi EΤx UK9 ؍)%HC![ $H$k!A AL*KJr[NeSQr=ZLLgaTd"M9)Y;b$C3q6eq@fEYZSE.?Mɧە鞟~j% suLox"Lugʘ$]Tٙ(Pm`I>tAeX xƓy3 Ϸ[A"`?:# =h!-oV=0zTDQbQ*Nއ<^z,_>|؉z[;=]w[L糧t=j:/㫫>ط_^5Ѭo?фʈ>r `iKBALxR&'dz4}٬^MgKd0g%z(|V+tvEv`{;w>IjnknMr美كqyaռ^h^b^9$nS*_?sah/4ʺ0ȱ`i}Xrgܥj =cTp =RSL*oBtBP"eU905hcʻgW9# &>UJ uY0mW?I~('},fN7^)lȻ0A(|r2ݻ}SLe?e=kÂ*LJ^]YX>+Rb Wo_,Νo==};?̉+SV}&+IB·>,|X~Xxv_xR+}=~_1Of]Dn/._xQ'WZlfr9[O'E2>nyr愵v|3zJ쭖u]i]1ӤӣKKv&m'U] Qv5-3ae"wӻE[o^~͝/)P1 %O)ʾ@J)»R 2(%6ZӓMe HLHʢ]q b6I<""cdjE'Ti}LNcWz|dk{qQb3qck/JFxDS'~{ADKw=oL!&@NjGt&H#c2: }S-*wjsKpbjgzt=S}>9 g (sʝ>ҩ,Hj=Aَk;==E$K{Yr@4Y%kAd]ʤzk +Ueϡ=`c${;^D{5Ghݓ0V䋽:3OVթi / 9e:wpQx@QxQ1H}ipRKXE'6dp$AH5!VXY:@dME#'Q d9{W@ U&Zm|fL-B!彻9Rb _Rh?'+,.Q!d C &kMB " \mCy|Rrď3 ̼Ňֵ[EfZƣtTPRZm"$Ѫ˛.WEXWR} ts$Z~EuSEe2;P1Fڊ169Ė)EV[*O:dgI)C641+y`!G3qI{JAK9KҐ֪H,ƃbc(A9q` 03& (8d|ICn6BcH0&c"JF,?{֍!_b| _EfovMekYr$9.~H%[l[sĉu(r3p8#gG:gh\ՌQM>c'c$cȲ,L>BcXD^%1N)fr*"HF1qt2G26X<; Y3=}28J p.Vtܬt+{QD6lã~X_cFzHt0b0ʭ2%1L8{`YŢGBO.&Q<|F];WmZ:)82"hKڽrx]1SM<[Y<1W7Mw};.߽~oI|E+0H`- $^Oƛ--l3isֳf\׌{E|԰ŸAۏ Z ~:0}ߟ4gO9tЙMX< ?L4 Mb~|OzqEM_WR‚x- X~SNnm5vŝ+Gߟ.YJjsvNf8?yW'P]]wƾgZX̩Ĭ%(kf~'ǥVwD?r:)I"Y#H B%Zx#rdZZǵJQޑ\բ>< 70g{\)a6-X }< g4vPBMpz~/SZ;%NY,h#i]9zX^^-c.j7V`i?pYQgM) "T K,PT YQPE[NW].k"Z!gUuhQ D7OeYv1EJ#rdG{e@2184I1[*pC, .ub6= w>H7 ?>7vE~?a6zo"}wU{ͻmx2}/p{x8M5=܃g>ZNg Ś:]FۨRR)1˼{,{pxR꼝~JˏߏZ-E{QiͅS2J%2 Ky%rky fA I胢@"u:k ַmT =[ȋe\ӳ KzZ_ΞR0Q&&}gz>N% {er載AgzGLԳCt_Q.hÚsG#Vm\AqURPwr>J0|NQ{fߵӵ۟WgvFFfl1 OFl;pG֑Vpq8SdvmM2'N=PF=O֨ef{EFn7%KxRq 2 DH% `f`QDfLM)b4w48!EEfd洷pw&rVZg_};J)_c{X+zKL2L[SR"|HZQ|TJ9cQE/Q--Jyu|[ߋ~qD5-L_bivPTgV?1׽W8LNpMf:ozoKrvK¶,k+֖ml+)=W>aT'C!k!Z 'P~h ;!bS߻nj hI"ry16>RL*hAbVFeZa1&ME)1J)U`md)iW ?G[lY=trV~Y3.2#ӓIQ\\(JV8 W J$Q8;&wFe B0k`,U'I/Dr&cFTAے}(]*<`$/mڭY4%*12r %P:p"Ah&5),0Iu@ɼ-Ykl)gM/QF0kS9 ֲgfJe93Hd\Ja +C :hS><m[;#!+$^ )* pBT&Q(㜐)p`j j頳:;k'WpHxH+FA`ӉhHY{ BgqUxJ].#<"Kb+\0)£Giy8iXBS$HMYV="D[ɲ]]EOEZc}-ÑEk W"øCFl2eiS6-wx' 0>Q:+/ }>)5ۑԤ,E>5(ax.nGv 47/&r?xI.8`0T-$;] keXAGubξdU1sr#2B8&H6c "dJ<𜳍2"0C]4f[sR).T>I &2p!kU D6fT&hϵFxnu"Cͭ)mx5!<"c{Z'wehȡk7f+\;#4`Їd4yX~]߷jRW4҉>j `)\Ad ** %B  =WmgJO>G F9„RAe:G\$9sY& qd9֖9uivI{AG^iCS\]Cfy^-=ߗ̾ew AA=*B.LBOKľ)T*Օ˿XMa?נy8!`v 2UKyOQYXRngv>mW]7Xmn|B9@FRu"wVE2d-YNF+nV'hv琞h9/ؐwR'dڮaյ=C*,Y ~nSIҮ]|>Krt6xc2R 1 !W2fEb(k]* mNCX䠣s7k!ˈ ?,Gi#3S%3ٸaFiדva׭r*>=nl}t w EHۈVē cƙ5"20d&ie44:iܣoёky6 scm=ZzYZ#f g=9.q@ų'+_]UMUY)2-β܍eyk؂sBqY#U 2!*uNXّ_yG֑v+O[ff27\E oTAa2bQ9eL G+4R]vsՒ| as2ɼf vjjK$)dz-DbRƌ@ Cっm4޵d׿B̗VCvf M;;YSfLfiآ% Y,Uݪu=c(ed+D8 "]>` !@c$5MFɈDq&"8AH{P .*,As> GU{2gû&ץg;PpvJ馁X쑁SI"e` 2b1YY1V-s&*_ȓ <]@O+\!O^/Gb94DYB$cǥ(=z._@,8˳Ix/u65ZIi|˵()>;~^;#`8R,s%j|9jp4fc 'ب$qhi8{O+9({uID]_RQ;7jϋdH_bVR]aV;To^U]ORT߀i>nUEӔGsSQꣵߒ٫ANnM*aTWד=IՆlT6$7VaHuZT-օi׾5'aIVDx/;՚jG:Ov嶭rӭ\lKAʖzQ&k3sqo'gG)=bl)k7aڮĻG 6.0ÑsnD>#nE}Ml$WZ>ىe: ۼzD (}vԹ]t=]*Ou@Mp¿eR{;@e(-ަƂóY]E#i޺y.B[p 9v7)뫻 O'n#),̤%7Fs5\'6|YF#"&e*"\\yeszvfp˻QT_wfMe&\&I3uK# ,&h[/8ֿ1S|]Ut4fϺ޳Ǫgy(g_F>~Hԛ#|۫{ =?\St"1@d -i7'Hqz(զDǦYJSQ ˇrF["X<n^G"w=xυrTa"Z@nry"[&S >x?s :FAqPvf7Suh/(Ttq$y &jo#/DD}e2HW3W)JS@BtQ33=?IR2 2sAdT 
\,.hl(x&xbٴ'/ӽO3y`ia{ߺc{3wx' *)*m4a*eVkI&f.pxo/a079qˑf;p-/vo/1L84wAF#ċ{%jtiFT]GHq3E`M .\htsѐۙF? fTjO0 zTnmGQ d_ΕDs{F~ X oW+m& !St9&Zؙ*K}!WEƌuiCw=+/[<8Ykwً|_|gw{@ǧ7tGRFF-iK:Jp+1dpjicb΀!UR*AT(oh6іV0&02cʐ٪6l'[m-0]{֫_c*wb[LUpݻ)e?m.ܻm5 d8`LӭTn-.",.^|_YKg탦Cye'TяPD+|H{;|cHwci8WDa6Uv z;T&W}+QzJst'Cm5X"TƧpWs7>[{2Kӥdqvw[J>&,7n S>EZv'LEʍŗJ6.o>%ݶQiTjR|qCpXLV11 o7+;`:/Fo+VV|oθRk^w>qV_@3>-.#;HM&`2Ngq U:qڎ=N8e!Ks NQN(:gǡ(|7֙՚W50X*3FY,G]™depaì3}f +;LHĵIkA-%V),Q>`R4P3/4Ćdq.I)S>j8ђ(Zz^b!p#HЀ1d2n.sm+9k-"Va \ @#K &dv((x1DO3$AqSx4:MX#H٧B{HKg/&xth|-S_evvH&,W50D%aQ/@ݧ܂F,#'oIiU$Bb|NiNahhu%>up `e9S:$Qx&02WlRF)q xQZj7@m!v@1l W$PRE!))XN,k==έrIcth׭/Ef̺HNcDBmU}a/ -m _ݑJ YA׮z|ou/ - nCja$[oP\ڍҧMhJ)h+J2c*$OpHv9z/3i9yjN$\ 2ӪAU k2P5hZ |Y%P Vjx$ۑ ox`Lt#+ѠKP-bYьt5YK1etSw"}F_NO. O&!z*4}4 ^{@#BK= R}R{ЋIuD!%:.h 9t<gmEe`w4K r[O րhg]+@r]Z *-f]"m!I9MHO,l,4^ 3@Ф xd]\EQ:LMF"h4(=(D;䍃"2J, Ұ" nj D?Ft16ʉ6łkJ!eќ8Mh% Pf%BJMKKKoU4^P^ZF7mU(@6 ~+~M-)K- ڪԆmryw^PKuxdh\]u=@$YګY5 Dz4qu*ip$tiѣJ$vGk'QT>?mCh5(U/ hVk֫DYU9tw+@(|t* Àe+M#g@ BX q4S3AP8QN=ZMW:!;A,iJ M\$*Ek JCzuTS.XKC1ЗC1 ՆL̹|$Ct ]Xz꤀%a $diME!m#uygjp N}3TDN(E=^<]m&d1@ ?B7Gw8=ᇭ'g9C xHFZs4eR_cƽtf_y[g={og;|K<:$'8B:N D鹦 PD@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N/ n!9? ' `@9QJN/ C@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N/ d ; ' DFw(N #h@@F*(Zv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';\'BAF}0N \ (ed'З:@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N߽zk[Քz^߶7/7|]Zže &9vpKFmVaҗ`\Ch4㳤p=h]w(%• Gu8pEju(pEpEQ:p%•3Jogz\yr~2 }.-9_}t.]YoMޝ{ م_~J=x`J C gI x ?w4;OY@ٟzI֚ϋ^ỿǟfhe[g?}};8\[U)yڜ()"j"52m =dWZ})970)M ō\<>sc ؾe6X7y qq}Y|+\#$WkG[7 џUOn=&MA'][GApRuwlQT+h(CH:S5lH[kO'^TOnGO":HyXj!ׅ9Q7'ۍj8N8 5x\kʛ a )hqL9忾yq2^njK|Cx{vY6pqM͔¾6`[q )(5 NR n/+2M ͆Ԕm r4ŗ(O¡QG}9=պe6_lXk*ys֮~>mVޞMOR10#[HC0d,֋.z9Qfu]ǃ8ߞV﯒s˛_ct;?]va.+?A]z_z8#ku*ߦ0{u|(]o,dCF8E xV:/cu7>w&г粬, C@ڮjxίՖ/v|Gz&7:~9rX/nvv;~Z_kz61]翞޸]hundx ,ho4NwmSTOd|>\g}6 //ܙq/7 g hƎ=[_]KnA8JByW29bryT5]COi ؐ&T@*Ju%%RmܴMQ jBr4Y8#` { En'xc{>m& Xq7?|9\Ni^ޯlF چ Yèu.z bT6aVrAVYeWDNu)]$a,5?{zŏxy">n\SE&o+51jAz)JSLz^[Qo,l{A&卫ܵ6NQSwB8Y8Ƒ/ {(}3zAQ}%]J*ŧJ\}ȁZKs![߳sUtӳj4 {a[ Tt"SG#l؉W&(;6T?Pϗhiy$q9ï>jV tw+ofgmy'/*_c.nAH}jܿo{Dz6_ufpPjQRKo)ސ{p2Si*;UMLqII+hǛo"d'D¹*ad=X52>#{zKxǴqq'w_v,wU t(ӓlƅLdUAkb K+3&¤,* CV^vi4TDnV`JMZ}L*VeE]L,nNؓs?byݒړ}AmŨ NusEeֶ^T͑loEAd͎ `:,r8v|*9Xi ╺y`Mm8PP#0 /\SfJd܏_-D0 <Ĉ܊S9BۄPӹJ"UT̲ڦ.+M;ikڭhSf3!"Nvnx8k:'%E;1.EŻzUut^m-P#lP!S*kZ (6ߒ 4}zcMqx .\r5OylA,_^bfY|.A (]ǻ?Ge~Sɰ:`;4F{!fcEBjȤƵ5ir;=bcRnl^!gRtPmH5%B>N֩UvFNK'8dJ=U8(}(5.Vm~ǀfYyCy)|k' 3;p6Z-{g;ul櫟j->SCg1}"+wűggrvD˲jnE#XW/gQ9:X1AJ>=>yǟ~Oqpխc?Gp !?e*}:Fw=o꓏_99F?^]prDwwe)_Tmڥ|xo.D?p|wMOS%aO'S{u?*~~Ky}B9sRR'|xR?C]N{â>ua=A:Ay/'QRzJ}^hjzfª+ݓS)>cnq%$wPК4tt:&%[\j3!5)]i7 }q{cd+%مu xLvȽ/SL& %${T) #tDJ@>W4ȚeW}OqudžF}!NO*wPR\_!~emF)uY'n*!eZmo9JTI)A' tזGd#w5GO#XM2y뒝 Ec$)Yd2qtOMcA&JH9O$zh墐MCe0QQ -C6:ҺEzq6Y8gw0V,QwmI OlFGKqH9-pXCE2|._ (idR0%4U]U9diH8m"\k&3o*e)[PL܂4#R ^ DM|b1+nYS*(^|Y ==y\1GuP5NOwN4Z,"b5dUI M Ƃ"#D!$i5D:cѾd ȼjn#'7w#,"rd͌F*qi8ш9jGHKI-"Hl젂7[c}-àFQZ Q&;sQ5Ŵ#*iV\Dըuўx,^nC(rh _t6iHY)l Ɵ>tȞsCB#87d0|yK.)L2p&ҥHK Kq$̋Hb7.)@%sI{䒢;51)#wF6ܒ  Z+s<:3ޒӒqp|usJ<.Nzc**@qNA=#`A Ӗqjkl1{re#Cmm5]lJN4xp 7t䑬$z$L3N<9*EmDd$pTଶq. 
0e(/ m~k-#nBNRGRBTt1jBe4Q!(AͷyF5wZ?lb+m*ppDHPy*ir|NfZ%k=&Έ;h|\UQN.}H$Ss.1,K+p1/ڨ#GYT'c{$cshiv!k_q: }Vm0u\>9Ўz9t.l$K{VT l8+L'ţFߓVM[ˊ]gџqg%WaT (DiM]Fs 1 Z*cT1Q!r̞ 3gYy?txzfZhgK}'I;-єpBL 1iB0ʱDW8Q$G>XCϔL)jURE ojbѢ DEKF Zg Xہ4bia?lR)OҐSREIxܴ$3DI|J";3辙A!(J"@iY)J;r~tFH)5mNB6W\![;C:\Ϫ|BC?casoCE(QU'G 7CmlaG}.68-LI1qĵ6ͭ`\(dd}8[a&?7.͛PCEC.o#}qMINT:SQB9a#rx$C; ׿J8/Uk@J9ߦ6~c8\?gW;s'K(U ŕKulf?+sՋfӳ&1@ Y~Yn#b(BF:V;_bhqYk[q{Kafm30Zp(|8_6=YٻM7uN2զZ]WX:y?c#!ag\R-G:1UR<񮮬7_7:Cv_ޞzW''?z{B>y˓]i1%A>E}h_ziaU66⃠,ڭڞ_쬕quA7f':-  b~Q%M ?ר9eAA<  ʧq츒w`tK/4}Dz-rRAIҼ6' }1?Yw0m?k+ ea߭5֏p#RnsaN$1 ""`":&yb" *D5:Id\*E蔺mi=<}C+P.K(ɩ"HyaS40\t&&2vPBUx I_𐣨/kU [*'I ={:!Qƨs&r<x(Q?qrΕEV'4J@u>ŬnӬhG^?woע|tzȰ#ƴH\kJ=AМMET7 4BBTdvKjj((WA?,.UpmՂb%`Cdz8(X.qBEq8:߽zn Ï5%~QohOՄvZC@z@[BmRRitBlW|3wMSUb_FGnF 6xLQ. ǀD;MA>yTqk>%VZշWTs0kXWO3ӣ+bی&U+P Ż?^/ ŷaVwLjUM;,bEQUY&jmI\6Y#f[IV9Oe -U)Hä! nA28m+3g=砧7iY?ۘk6kםf'y&7f֝%⬍lJXFLk^%tֱ$tW;Ԙ{cBqCz+£{־O'@3;7;FFĨ2Rdh>\8IB81 L$FksIʁZ)hQUho /(k9sa# ٳcZC`w:6`n;+cZBI̓zG1@{q>u(LRM}A⠾a"yqCG"_>` ''7\&q|@?OV>1qTI'g|S2I8R[)\9rs,WPA>u o#n]茥m:/zw/[o /Ƃ1|:Sv~oGշ_TƂx7hh&y[@n6L(^ff͉bήegZȍp6^:?r9 &.dٺF7G1w1T)5?-frt|sznmܰ$yB8%8d\*yRD@".FrK)7RZ)t) G"h Z%dܑc څOcrpFwѸaNnW-vA跍U'wJ;eV?'U$P`5kD5 iCQJ8HhE0 ^vܼ"9Pˆ6r:c5EL)bHnQ'e n%Vqκ] xlK)v^4iZ*k.Gg%<5[#i K(]šyD0M:=k#Hg;ƸJ#?a3f]t$#vQ8sȈˆ ]]%/(A2ڥGґVpe%  x9Cxaو3q ֆ fVRkAv^uNS5ל'gu[[J)X<:d%j јq$*R$YX&}F#迿(7oBsL[IqWnGmv m1Q)YX2.ްC=[o'ٻ T(~1N&fyK7fF75(]usV#lrU6gha0՗ f5ݨ"]:H~߳bĜWYuTp[ M}mYh2wwnRtVr/^b^?rGVSwpmpbS.k˃&"ngߎc>7)=Q6u?eRk5eP'jK[|&?izv=4Irj-A ϡL7^t _.:Ngڤ+``7b8EϙDF\V)*;j W~2@oNOƹ_pi墦d1 Q`1xS|yP4Qxp(|@>8_Wwh9rlSn1^]E?}Z7ߎYuqsvr'yr`옋!/C~$nU[^MvNSJ͂|gK$8 T*w4HR$Nd-g{0Z˲;9) Dh,@X2 Z0 9!u1V"70+o7PM2s8'\X(|# [ڝWw>y+8 Bg- Q1z8,}!&&'PRTp5{lqηx{ 6T0abkZ+ b$e"c>R$J3AN׭8}&ncw}mm}`;`Q=o(ᒴ6ݹGbKݨIj6~ 6%#uTX 3j Z#Y\sBvI^{0TSfLX+V)pﴡiާ:4P21C7d} {KʐE)RYLٳBV> 0},O(N^HHޱxvHNӚH@>>+ Z YxJ$AT$@&wIo 4$&*/GM(sUJݎ ɥRlz7ݩ(Qz@b9#}?,C}נM.s #yOW~xSe\() P % p6NFQCDK?V3"(Ǩ$ȨN1&'$bL0Rd="DKUQ*KR:W),P,d'c(oN-33Xl/6ˎSwbaUqon4 e"F3idsg M%,hp(D7iϔ$1`EAgȔ6(JDlsblb‚C"Aˆ]L݈'b jSAm֣v`AfOR`R޿%-+v'DNx.x#PV6FсF@BdB/8~A5hW]n1uvaԯ6"30 -e""/Gq N9o ˭Ed%c*Q]L<Nxۇ|XZUk7,#maUhղ{mV`Uժa|z6;*,Kb7ՖW;ǬoZcnr=;A\UKzW5œr8Jms6%BR k!IMЬy+->=HӳɥgJf' R8I8oUDQ|Hmg I#xD~rFjr3`5`UqQ8!1&F9ᔕ@TK_սT2/ :ĀMA 0 z.lTP^ˤdMxC:@B_H5 AKV8g~ǔܞl{ǮOZA)"S.ĺ}hL&9Wڲ"YB/bnO{UXVSҽ|Pʦa6ձZqX}J bIuAq6(Jrc lbDh-& p hR$|2+45,G}JD73_ 5ȊL~{Bkފfr= ]%-[<~g^3_/(O7iIÂ1Ѡ"$3g1to21:w81 #Q'w.s%POl[dVip[ WA $Oz>He& _]koF+DA)" 8hMphy aeI%QY,YeA`[jwv93pvFkBZrAk'vRxUHb bă1 1c]vsN SGfhQł֌☆||(ٞ㉽={A!x7XDm `8R fhRB,61P侕="@|>hEMv 5,p-"-u *uI,F HVEc UctP1-:AӴ:3?Ml@ۃR BQ mI&|`Ap!*FzC!)Dfu1+%0'O<($E!)5!O` ϵKȥQD;V B&د6!I%iPj*uJ<:8v)&8 7S#g€U(x4軫k SI6)k(r(BcF^P$$R9K+N!Qki$6G) y F"ZFe)A@h7FΎgnBVRGR|2§5! k%JL+$s8wz?l8.,O!Th=1d#2\*밊a,>L'_TKQ25&r)@ErU0F>VI##d:oFl. _yƦP%*>_|+tQU]U%)3c\~sbT .W&*ɓlmFUl܌p^yUq>;._g7IzW|a>q?=+W^s Γ"ria!T7#1}aH0\sUށ0: V0bGb1mvTnu1ɺQk*Q)^XHH<| +f#~;8y6XѩFNegay;s޿y{yӫ{~w=LTqݫa,8:$YC=fM -2lrՂ2nr55h9narg-ғӏCSZ](N= 3_|1(3(y*N,rP! ~0/2ɎHXb:>sP»4ٚG1fSt>}k ޫ{1]d~V;GJ|_묟ٛh(5oۨTK<$ҁi@D`pR# R) $Jc}k}nB?`֠?0ddF ~R,"*XM { VHK%T#/*}AcyX M[cR;ĬWpJddrVk/;˅[,3VZ3oL6 h4ߟ|ؐ.0Pvy %~ ܇8da =4B}lGk+Ѿ? ,}Cn2pcr/3*VS/VZْKf9۲N[ik>( i埼&MV0#)6gF\TDcQ_Ho}y(EYwٮ]5L0u}m - HD5_Ws~oU:7H8*גKLjcV]tЩU7ۨM>]V-U׽1B>}2.LdzEt 㨪Y$^n[>`'k F Ajdɂ7c)A4('Z) 6/ZETP U ߊb;0cD% {kA2ReBAB "o*`"{&CDfL`{I rg딤ƑCcܭVl wp7E?Nɻ2@w{+t$ݼI ?g1OG&}_ⱊ`si)aJ(^ i >HlaFiF&3ߪZjMЀ,woS+PA17;SJbm ! 
nPkt'uO7񵎑5/B 0qQGI'  ш 9l#j1?s$􅰮C4{-@hD6PIF @OG<̾r5.q+ h+ΕVX66!FTJOFsV9,0ASN&>n%7X"wRy%^p؅F1xϪqp1he+C"7Hfgf諤^X]g#UTɨn^};t\v&W@.h<.šv]G.;8qvk]-oMdtBA:ʄ:a2=FNA$=-CkUӥri-57E׻Vb,J nL-N}|p=2w4aqHY^#ݹ▱?'"3@}gx4e}v{<f]Uݶ-e6b$;N'*_هwP *wSf 30)u{a§25z=`hrz3w%Ϯ(vW S^]|WӬ^ ִNHKM؟MAPz1!!mg lgv44GCGt@E6=N~d4 .̺fn !i) aPPq}G]h䟓Sh g0TGev|y&2\E֛>{<[Oqέ9z{k~E$1#.RqDVt#-'ًy0[&bדl.ƃo^4J|^n}8Yy6R1.eJqA^9[]/2jrף6][ۮGj'dư QoԻ׎ﳗpxU$f8~]m/W1=$oxUYwR-''yW.|{;IU|>on4[f #PyʸxݽC>OcsR]r#7F8%2;Wݚ֥-D %*nIKypHd:;š{_ g;оUay`ᮞ~ut~kv AJY&Rd(9SV9 T4hRpŽg-b `V0 1{}v a(5lwxy^MOaL"1:ӁC/B|NY˜K(_=69v/{ڗv%*x1@_D^8*XO&˜,$-]-tUZi-A)9uB`1:2HgB`H UZJ7JB !=8&೒+g أ/'^? NgO@`Dm0M q,m+Z#Ftnj#J--gԶ? x%^vuKQ.BD7w ko i ;&>$8U:&+LR ;Up Sf.Ej,*YtB+Sכ{Z+hxD9j?σ* w7J׶ ?E{Lht'^7옿@A) M]1~\r!*B 1&D7 ΃tvѰ/,'>@Hޤ(VuZ7Y=P  x(J[RZҴ?dЃ٠XT!'&'Id\?<{qeχOw@~F/J :R*u \S$2 N-vzQ`=k&N3e,[ji MHJSkFEqNeٓTK&ss>}⳶!{ˢu$:eq-nzխ BvO6fk8 c9Ұ0fAhIx픠dWT.LvR2mxBo8zWBdZzTvccnqWg?>?;5s=>rzMLGSdЄ#JcFѤUV%ђCO(7Vqt±avb)+SbLL3k^_7qv~I<.|ѝNT2) D圾~VuKe~tT>)q5!GP]i:`Y|*Yl>_~q͝]|?W5?kQ-Х R?٩׿*# XxtõdƢ1t`L  z<>77͖yϞE𻺷bmb_-L#RlHa)D49nߔCy;nD\[7nf|SCrENeo)y.l7)󔻌BM;Myu]"-c|W3sւz,Wt&ߘN \:tΧվ]G)9l&p%%hz WQ7ȡgvnJWUFhCrhDWo4JiǤ0s WF]UdNW]A2F) v42᝱UE3UQj5+Br֌xŶVtUhJ1x(aHWusȠg''g!i-/e~v~GT%XSf꾛ӣ߿-*:P79ݺn`/Y|'J:,_ϵg|[/{?; q>3w}ǻ3|8`)CZM"8| *]S3 yqPoEwaG42zCm֞W߻e?Ϊ_eVm0F_xZ^gf'?@Kϸ_Ғ =⇃/ X_?_eYqvZy=|*{fayp$G$լbD 5jvCj%MR-J5h1"x.l3VrtUQ^]ц&g+nی{.ͰZSv(芶+j[/D #+%*\cOW*Lt"tFo ]U BW*l؉"]9ZppŌYH| XM`,ʵ7w-`xŌi=~NLG)]x Tf5ۯU{ˡ]ջ@mXNҠ"3b霷)AVgRztۥENm^G־jSccƜ4>h I4RBi>\ +kCN,ELW %#lbXINI)64n@ nҐzO{]ORw¾|sPZj>wdMCLhmӷҬMQw̮o78zEm"j u\9iZݸ)>}=@Q1vVG٫.(NIMhf>#dDy*  ㋰1SJ5GAĔXy%j5_\"5V`F"$H2.*('4G<.9ͯB~_]9Sj_9_w̽{/>yiMs/Z+vť2 1Zu"[m49Ly4AfG:cOdR {#ʹGN͞Z4o.:whY #A`S0:%DRW;wGƓ^M=?BR_.2!4jvwj6 }l%Ii$XEID$VF'+=d0dSH9.` VU!EY HU%xPX66lƒ=Ey[+j~|j&~\?^#Hc$ymWY^5llT1I,ALC: Rl(t(u+I(a(.f]⿵!ʘ1(EiN1z587,X.||DS)sy=|/_rvƚ lY 9Efu8/Gui~:*zM~K.{J5>45A_fp`W-LŒ%l\V`K? x!C. R 2TR*!ZPXX$AM16ZM-å4K& N4f+BK.։ ձͿTefsdإ/{".*/ٷ㣋+g(3S "V)iE3"asQTEe,gjYeUAjpf0`"p82 8,ED;s(:0`)Ee#Au.Rpw.uVD`۹A E0 h# MhC>%rP"GQebIYDRF‘9!D:/7"+1FǶ]Q5fD51Ĉ)O뙄Ҟ%bQ0G%L:R1^ڦ9+0gcCCR%B(gB b6HRȑ4 )Ocט c[/s iʋ1/ċ/>dd6^98r[m$И5 "){`B/>/YǮ|hٍ®|ϑb%/r`ϞmE[F'"&A.( }e?T3&;9chYfNERuvj(byv|ET}N])b:PHЅxNN)=ҳarp0lDzvR:hPq%D&홤2H~jE46/\fyd!I]6;Sdw!GѶƣ7Nɨ:ogz% ^R-4| 1Ws3]}|< ͺи/ۓ''/| o QU3 QX\ӧ[DWODN}' ɱ6?pX@L('rCoTcPai}aYK)ͦք.#dH& Ā^CFTȢ)+*JAD2d6hJ06+]68!2:)) ؐb\fp68G:"([T;05QxNy|֕W_ٗOE+wrMxB?U Յ6wEYFTG!AJ'xG QjFWEsye!y04U FQD1(Yz|JZ7ϋϊLxi"{^"CyQ<dCb=[n]Z(t-<5&d%S ?_z!C_—6q·MDu O"2!qp8)Ȃ0^zV-]k%Qo{RQ"+Eg_tY!2sR,&Lj'MqҕJX$yh*.l]n-X<غڲn۲XMlV(@J ¨F$0!̻"R`B2qh9-.%*5>*hx|?#%ǶQIdz-L4BQ+eȉ!d bm=zDXsjj-W~{S%oʷzxfdd= Xƒeh' (Bp hԹh1bE"A=S ۫bajXߢb^7&ݹ.ܤi4딗Wa{I $j: a6y`FlFuUu>LYuxyO!))=&I5<[dI˄!).$($scP"[`"T|OA#:~{R$Tt50y2eɃt(L*Nk$tKs䵀ѺaBJ *ާ~_N 9@JdͤA(#dbQkK.b00fW &ΎoCASGS2AF&f('z*.Jj`Fopr+m(<4kڅPgW,Z`cXBX UX#) T%y f !z{헔֫va4HAIH">cc XTrfr k~I1 :G:vN?K7dv mUg|!w ^'yքеpkOοTTU0nѪhJETԐ֛|OZ}/Gv[FwsuS:ih*otMJ[Yv> ':% YEz* :{d>(AjSs ϱY݇acc%fO@)^ЩD|dUV@xE=^ԛ|I#`u=IP ,76yJIR`E m`k}V~s:)-+'Y}8]HK4>()(:D"` ZSaܠ5 mPYTꔒ@0V]3d͓"H ^ T y5@&?k_wυ  (^'ylQ3O٧':'%?,tRUuDza.mJ}*7cy-i$2)Z3qpNkA4NlOhյS7Gճ-~t{F'd>ȦG:颊/{g!Sє>2: )YV.k>j#M4\LiF<7sszpK(ծGhzl^݌kO.ǫ]g1ҋY> 2/gV˙(6ѻ0L턦!FҮs$ck=\]*fZ3XgzcONjm7sT%ɮQ;!ͨZMR|>;O9[2Cy GeV SkQro?w> ~xw~u3A"Q/_1kho=F]&|q}1-aP#6㶄v6;k@Zrix|7ΊhԗN{#AaGytYG{<UT!/inM2x X}Ufǵ[9u,qOLE6xq)lL5RFRԧvZ~`'C׋?g "'m et:,?KI~ˈ(KQ9] . 
J㘦+N=wdmj " NSKS"#dC 9RY˞q4%^/mMy6".ozxuV/oe@=jP8Myi~lQWycL2lR8H5FxIX@C`k4!qK$.աE=>iyy\PH}21(e&fLT"1A}vɤ l(Dk_\R]bvivLw1ɭA4xѧcܮgjht<ɭջ{z2}v*]Gy8#<]7/[>W{\OC݀͂Ŗ:ݱ.Ie+ A*ƈWt:itRjZN'Z陸V)N'O'5P#^Gh>gHE4Zؘ]C.އ26hߝ i^R} GLO&篌#^lbBt#Ӌ _>\uN1ҍ.4lCFK&3':}ϋ5Zwǿ=:](DNZheːEmT̵ `4hA@H* ~+xQ rF!'94*JɶEׁa$ћ8;-mه]M_WrqJ@g_1ݽ^CI9_ :m }Vfz lY~/n*VtBmIs*u,Mb;24ܺͲy:{:ݴwO{Ќ={,w0E]*}7 ש~xᔳqM?=|еyج`= 2$Yi%!$[.D.8kh"Q$BiXɽ'zUb hr6@Edk`A%ʲ"iDD QZC9iUu IĽsoP* q\x2W)vFj:ԴsfW03Nfz u72pח,Fd|$٢*Ѽ 6f2}TFɻF!0smbV ۢ[}p~'P· (&;럒%aR^L0*EJ);pi ehN[+6$eQPb+U#b**F&]Хo4כ8Vh`x`[z͜Wo|:w(7,xodzlݛ}^(P<`>:qYx4h1%뛣|z@CAdÛGiL6(b2*ဆT"sא1)˔:$}6XKx!#蒇~n7ޓ5_"Ly˟կ!f\/i-?@zjޡ^RkHNj5Oc8Ne$,4YKCB Z1ว9w[ x{3$`d ȨZ\1&Ѧ`t,NE x(ȣ o% ^~lk-Q0~-󦖆xnb:_ YYyY/m1@~tC:-f|q>4;DIQ΍ |,H!wHx5h K6.fȦDeP )E! !o:qೆ?}6lfz._MX/?uqN/7VX 7`LiD^$hI$ ` :7b$yiK#/% iX&\S!kY1QKO$_4anIs+Y,“XWx  ÷b(4`TH*$#s IBCDHQN! iqw[awH?ɧ Ύjwqvp5vKTxmDT7Wk62yUb#U@ e6T 3;_t5ˣIԊ 9ZʌrdV*P+nE큤f<}$0F~e8_rzw0%H4  dE(dq"Xxiic}Pe-:tvWp`> ~w'ԇ jt :q S[+k?{Ƒl@ߏ v:ۻѯ8\>$-(qꞪ5V2iQa]2*FK笱{۞vM!vu3ؗJܔX,pJ]HUH|8U&8p-IR$1IEP$fuv ${6?a\)ne$eȦ-k"Ks*aVQJkJ06A.C9qE1^T41+YA,!VG!,'G+at"HdauZDJ,7dd\}}k-W|Mි4V"X&V;?bܺp&\N^mr5\"OImbE$%bS17`]'zGY3)ӖZ"fР\}q / =/]MI>Oi!DC b[Ezr٦!e!$,3&/Q'̘'{J3L%;\CEAwf4kݻEr5uE̋m0 m${gsT4oc\lfL>gf>]4~R|݌o?[P},}\ v54`Y׽_4*0@7GGCphyVzg?`qqT =(8?8imM>GIH-4iA7aZ= @^zev) Gb!c8zGy fg3pImơx)d.zYg8<9g,֫[R"Pp8Y:y-W;paL۬'+̘ӓY0Xq1q* mbģ?i8 9yT Yd~5i%-)% -OLY&:x~l~wgέfml;2abX4s? ɖuzdnS7@m-tn>IjnYeݪ-.A;m$FfJE"l;F%ooYJ|Mw4Dg#!sRulJ'KE,,KJ_rH3/MNN',I}IeJtgMq \POA/@C2 o/C0BSOy{4ltOl׶3X۵E޵]}#gI~s9NqǪ˸ݒ,aO@6‰ٿKIIvWp8 NJͩMa 8H?A Bf&of% E<=USX+Gnu#ý%H~}%[9_+.a ~4fngi^Sgr%1"5O {x(m0 U4]'ho[OpA^ OP,T†f_E֜}O\b~*_0QY}I |A쏗kӉSY3hkYuGXcpy5|+d\>g\o.m؛>~tuSREU.KmsUp)!}btXM}NwO. &chP_\Xيza`&.z)QaEF&A~>MMc ߩuwۢw/h/9v% $g\Kr1 cA^, +à z#(wRId79‡w^T{UUׯ^cxdiߺG;zq;r!Ĕ(=-q`iʵŊ"GoXO?OgP,̂Ahk˜)Q Єy~l,ؖ7ҥF) wLXOTyVZ ~JhMfHn!OX(X.w194ta;CVKn߮eh J2{,QI-XF H2%"6I;`1/F10ʌ ֱlK*RA 8U؃;Sf2x\UӲeW7TkYS5aVdMzMģ;XGb4RR3U%e9AT5Qh| fw4p\ avl _7#eŨs6p˃+5Vk^RJx0C Txu6iRz4A1۫cy h:ow zVUZMwWo^+fCp,(J]ئXuK *{D "D[XtSགྷeL,Qx/pP,ÁH0 C(@xFuw*`J*Ly#Wkk/9)xY,-,xGI e\s=\𙣫ո3<@$>1]--a^Lvj|5gxq߇wy R m ^ߙ/7NE6"pMJ)(ze׷7+ AɎ.@<:pO*1LsAk縕*SWXr2Equi˘3 sTRZ-a|O'a|9OfS-&qfڅGbJz/<'';g6'Y<9if +.nһw/gebIj_$"K43&Iއӣ.8O2yʹa;3Fx^"Af"Ŷ ԋaƄ6er=ku"{QѤ¾r20TwIw3JG?ƧG_clB XĒ~;EgҬ&1utUE~H|&(`H!J/tQll570R'-R1$ մr ~0-OޞRRA=B2\;#XC* hq$Άgu:u?!CS|\fώ/Jpxsnϙg,֫[R"Pp8Y:y޵q$ݏd`~T:єBRvE)RH fWrwZٚl\Jxs1/GuM?fGsx=.eSoG[_RoP)[*;ݿ B:6]F6jUu؛IyOm@Q;86"X=?8zfڠ}_[sʒ/9J =qU. 
&F4wG&xf_H}LI474߱03?۸6z4l23=vf'&zFgU=јtv3U5[SoEKm杺5n!3{pVni~{9${ #5E'B4Fױd$#R9XJV%,f^p958ݫz$ //HRDh@>br1ڇ&*AiD OFjf2HRF$UNAUQzc~p~sHWk[76`z7) 9;NE\{]e1Jo% 1T3ḥx6]7 K׍B}BUC׍BhVJ͘n/xMQ |4\d]hP`AL* i)itgI,|vx=4Syr\ٟ^ߨZSvG<,b*ꂃ,Vǒ- 589 HNh-S.)*r) &2\653gs{-ka1ʓ`vb<.gHMF˗pqq=/;k 'C3&!:ke& go011W\ :9OVedpLo1:A[2HK3grΖLk䠓TUi vW 뭏QG`5jl+칻Mp>g>ԇ8yxfN42U7y"sMP5Y("`hh4]#=:ecu@ ңaBbhmN2xQ !t\Aɂc\+52?u-g3mRBpV>p6u'h|Uv4'9+g9GO">{7d۫_\N>O溒}t]m/}OzZI"`R:H\;L0Iŵ.QȃZ#c1 +:"lur*pP`CeW5r?[!zk?U®'vQNޖ'~_{7DG@t^3ƹ8Kys8¬@~mJpeT ,>vR(yGYS$.&ôƠc*yVJ9UI#8-/U̒]Cm}9ʷ?>5˯x +ad#IMF:}"DZb!v4ӴFÖ,!O2]+ݰr}s&LϞU %;wVY9۲JFzjaJJݐUWP\7ti~֑[Ϭ16ERϴJ,4I;ր7q̸G/?iK׋9yRf[l%_ӹh pB*bB-7:G0FYKT :i!˺#je6X-C> 73˜; +[90=tSM?<[mP&ːh"so S-REX Z| t$ *[I+^JE!!2\hcZT\pRTOK_JU^j[`O|ms*jB2ē8S2Ԗ-Y{&}V>: hy.fe8[]`vB o@!j/bьQ[S!MD6ZeRvD͙%Lk9W3)ښ95c=RMVcuIANb`~SSd|`/ݿ>/f#G HIB< +%ٙ`ADQf Lx.Vfe(ƞO ),!djHFd2̺,!821hl,ZjL̾hj-= ؝8%K̑;xPJa"W~c.&,-ϵ ס>JUB&dȁ(ѐIȢ&d&H:.PZ`HF5+jׇQ(78ee(8hĝ'F:%& IQB'BʈFȒPS\_u#gidI6\&"=RPF,iFz!j 5b5r+-<} 8[6Q듯Ue^6LS(e[?KEoD_ 2" ^<^>C]Y T؞Πn,q矵J ڻ邵G?[ m/f޴&k䝺CY7Ρj@hD5.JNxxGxG޻xGѣxGыxG *(36)OJ  VKsfA&V%2.^m96p!q]<w!EYkäe ks@^SP#?k?^/1lۻ륥xn]]aڏ>~}>Ek|nE:*8OW<%]ZxUlfW7ӈR4D sz\œKFBik@8 j/pZ7 ϡ`J +|^@qmUvn:  {7$'jtؕAq5RЀwq>9E`1l2SϾ#CPրm˰ut3]퀴΄UL k8SldvmM 2'N}5e|3V֣3V֋3V7%KZP<)O "Ylk EdMgjtlЗꩽ䧍l,fL‡O]/@ȝP\vO]fD`]@htC/rNf{&]}>WF64rLqoyk#wj|˻!n?( CM֤yU߽ո 6\2ZZ] ;dYЮQnb s2GЙD;~GHEHb'ޏd`l)GS88[MJ#xt^p )jRƥT@/+`u0L1i_s%a68H 8>UFrvEjЭth~o92iDsBtؙki|cbOuX%rOT6DXI"1hn-Z49& ܚDW(I̤h}@GU̳KЎqbdJuELjڠlmXJ杬,g9PZĬeԫn?,"^Is"DFY !D/RXF[L{!ZO/;E}>3Y0&"8WWLQ Ve2XʋH8'd A8v`'߽ es D"%-t]v09֫ 1! 6Z1u_jV㉣jnzk}x?>#$…aY:2!FcA O -7LdYA*zD)6<^to Vc !&dresƕ0 P0ƽ.[#LY,sK czՒ)P%1 z3+k)N4irA9yiv@,`3%t✻Dƫi*o?b) .x|j_R{+]&R\ХW/"4 /J_zF#'\~㨕.k=JX+s2$QVe0mݩ7YEg"]p]hk+х~Fx}} (>vg_,jY~X"{%uiz;uq_ֶX~E-;3dPo F?haz3Fވo3@_G_hg]z=Dzy:P^9˫ƭGCoCl^)NhahBF@XK]s%BƔV牠W,9*tGtvRBk LU҉ANz Qg8< `C- %fֆc,H%-C$ͱ"O]](+"Gj@+_:R@QԾIJ>lzi=+ĉ NӁa|zZw/0eGk!$(*,՚QWJ fee=Pվ" g#MtTF Zd|MR -,TN؊CwUE''dlRk-^X~ddO.-iabA[WFpuJ깚^eCgjOcdu6^uiqXXQ5U}eY9F Ft˔*>UViiIVIjQDio5NZdmRGAŁV/3^>;0\` uOο`mcA(l8pK_:߃:][Dl]\}YQ8A%*mT*sP4 {>Rc[ޱ?ΪoO^?P-|cU'=yUҸ )4-=$!c]%UKKAK|VԸZPo0UDkZPf*:P;)PGHFj$cGk~:|񻛓oۦkP.ɀ1(gr] c uDKJ[;mֵATR#>8MDTAЪħX=k JD=/i>;UEu+P|J#Z6[謚;[K|yffZe"-)^2ķwESL?'.wSM=j~G溹 eqc:)lSCp4`@ /5R0p:#<3?XbM%(D*CTF?ݸƸn d_5W*_v30/WIRUbh|uOަEhdQJT,[|_EOfO_(yRIHvy\~QYJ#]e2vEcBkSi7ʾ,>b8]7Yt8 R>Flͼd.5G5OW&V7a*ڳ4F )%i6)oJ7'd֛sUOR|2>{,uc3vc(>2RݘW^:[AӲpšϟ'jJUC(&kCoX^GPue".Ps◿>?<Ͽ}շϞt~Q *Yt#`2;ǮL63}hW{ рf`9M(l.'UO`3 #< _}RŇQMyxO乑`^aH[}婐ş)~d6Ҥ(->SehPJcoҭEe Ms/a\w7R2e'@cUS dnm< loF[tg7s$U(Wʢ@Yi%/3_ӷ mu٨C |>ğ9SWn`k_J8QAVzYECtʉa.vњgw)>n82|Fr.b\\-ubJP\\i)#k^<ƕxeVC)M/1ZH\1rŴrŔJrureBe$W lU6rEe.rŴ3\\94d`uE:%z}hq]Rrur啲Ǻb\ML5FPlW(pxXjIc]]^ru36# lq+U+流Ǒ+ԨHFW\iM);6{'ydR&SN*5~(a,-PO'giyi`VNJQi?x]5OvFX5 Ǔlho/xM]1m3jCssTTY"_nH hkubHI9`@F2*,mr;ղ]pܾs*sXT JjVQS%;'J~J`8Yլ,i*k1RPއaZHLf&[r1CgrL;r&6'"`|6rE|.rErE&нJԀ|v!\1뼟(F^G`Q#WD.rŴt]Ҙ^P1.+ƂJdhՁniG|NrEZc]1ƺbZ.WL|/W(W. 
#Aqɸ <A?{lr9l.Gsi 8t跫7jG"]XĞlscw6=M{csV|scو1kB#+n?vZLMqVɓ;?q PMdI]7N}AȭS3RQ|iE'쩩p6߷G}k"hۄ(wq<7z%)z_q͟4ij%w}_ gsXgi2Zgjiv;l+*y>oѣTc&s1՘:o1ډTT-8N+` 9\rYB ^[`䊁!*ƾ E&rŴh.WLd/WG(W&{i+=avrŔrur%i.vKCD{z7vnoKc9xv+ ?eXzmSN4qrV LnP WM>NduxeֻQxJ{ e$WBf#W{uh%v]Rk^(W1#"`e 2^]1ʕɺrR\1m'D z:BBuעEVw~(Gp˪ǡ(jl]p:\5Qڎݳ- {ohrEh b\s+`.WLD/WG(WY\o ڣ\1\i\1^PX]گpg)ءuk7JéUfCc8[қ|yG&KvscõiJXlZU2yJ(\ dfr l1O.yCjgrLٵh/LQfrJ*v+V:b|tOΔQʕ {<[+Ƶ."ZrŔ`{:B2Ji|v!0⡢ږ^EB/r+&bZy)v>G+ >$˃_r`.rŴ^t]Ru^Er]||W+q3GU/BßOn,_;\}˪ZwՎv̺-Jruߪm$W,Qd#W++}>Sj.'"`y˪j\1uŔrurE3+vf+ wJY_|\3Z䖧N.hi  |%VVd3aC-ѷԦnP?B}ZEPK(^PqV䊀ȕ6^E.WLi{1ʕ0*W,U>A5XWL:/WDDDreB0 FU.rŴk^Li{1 #"`,3FVwwŔQ9Mp lqe6A5 2u\}6rz5N+w >W*Zg+J ТZȕUYI|2W\irŔ^rur䊁2Wc.rŴSvգȕڥKhQy%-M^)u1T8':OiK,Ѡ&_i զf5QVZ0jXkaEF39qu6'=Tdv] e&\ Oθr+rŔ]BG+#tM+`Q,a7 Z }esh>ϑG%!FuvuK"4O&ba =}Pҧ.)Q:q *,^)4 `"-,<盎/}~@BzOIjzn^ף) T 3oπA5UzU` 6Wե%neE郖XY'*M!-hBA9U׽wUi=T/-PBRP P,+Z҃ڊʄxO筽 4Jʤqwa߯iU4MAOu|e-5jQ-ATEsPT :eIkS )Q E'\TQ-ɲ7#ٖ!l0 XԧDhzϹCZے(iϵlQtխsU1Iump-h:hm6%K-}9$\-\mXSZ@ Y;[\\הR#%IE?>yŌixj1Z#M1ZrV>k(fJd?W{d ڦbҥkgind,4T2R ??!Kh*gs#Q5{WߨGJ<D@#.D_Hگū{skKV1\JG:SAd*X C4*ܽOϛ4e7-GpNTG+Y7̶6E5ɍK6J |jԱ).ɇ5i709Fd{r5nj5o3|1!<6Z:[R@j%BRXoݸTYXOkh_VIC2j(UÆVX9[0f %xT͂U.EGV+= ZAwT5,vtF~YP4BZ%;6:KP[r=JNwh A^AkaFh4#:CIٰhD(QTzsPyUobX85:\vAjc4)@?(  ͲI:- d>QAQi+Jj*6 Frf2 VԠ ukzR +Z#q2M!lEP ^0,JLhW4 ~DUuAS[K1 : k<]`&S1RPk'€:s Ԫā 0| $$_L)X1z. mM!g8Z.fX;6@!zjA jhjw,h-]Fp2zvhu< 30MV@5@RMwU| dAy "EXkʛ; Z&`-\kas!~Z)Ahg=>\2 'Ei#8 N’1č_n@bj-…Pilp1:hK2h4&ZTtǢhBM* Cl.X,tSZ ]\*3J?];j0BI>SF ݂ʄ^݋[Vw [`JSO5nҭqdΔAOU0auϷ_zr{ϯ$Ͷַ 'M5r}_N[?ߝ=LFoqrOz۽oe~Ss,y{zM}=ŷ>r3_iZeO^U_h rpUn+ OWoO׏%]'_2A3?~'GSLDL:o·/`댉k+cM7z5F{T70J/6u9hM6_ics.&2 ttZ{#+Oz+ʮ=g^9#+ufEtkIk+F^]1 菑Ѯ 2ఞE 7Ƶz&#+~q++j&Z͡4r*̚ {[ ׬fG=|3(ܻЕ̩OU :xޯ}/N=!C]%FBW_:UM++jgf?`NWҕ7@X[bάck^(F,TdML:nmDߦ{\1>/y[_nZ+DKӋʿx w6%} krwZo3eKlcr$vo={I<$Mz ٸщ1QUi*i~,z|HڻǪ^| p/-zJ.QgM-8]KAv3UmJTc4?h2_eaA,PPLRnchpQ1` qr ץ89Fա;9F8#trG'c5~h[XNW2:#+O{] 7]']][d+::]1JRBWGHW|rs++ޮ7ܻb_t i*)ܚ Y]1\Zb7M1(ڏqSoyw^{}Wf<(V\cDGe}۳~l/&./v:~ 'ϟ7{ 3!Jp't5wn>#Oo0XR{oO}wӓQį.ÉߞmiL)8)7T<9d᫣|I \ݼ.敛_m/Eu}y[>޵uǑ05"x跜}CNd7//tK9MImؒB@/\v5Yt6ҏ}G;?@z8|7';?y N{j?6DžzH#UHF+M*ΐb4DYT( "TgE)S͕| .1Qz(Ӿ8ݷ]Z|;}#i=U[}4_L׋+jm{v~WﮯMҿQFE+vmjooj<Ѓ~?p1 X#s搋G?>ξ:p9)/\q4ʿ82r˽.#cU1ֳlX2:rAcU8v$6+"D.Ţ[BA4ڡuF{H")֙ъ3&a.'>Q(|E. gj$Gӏ qm|l?|q(o1Ƃ{kc(z)EEki[in1Gb+_VOq\ @ljTl2aT1G3Gmw&ߞ R'PcP]$NdRqD"":LH5Z_M 92{W5)JXJ)A ++Z %Iΰ8a/ug(7dDV>`o%c<n߽/[m?͞ d}Qd1E]ii"rI1Eur];:/[D^,̮;h*u66Yɕ$2RKÆfU<u5q;s@6WrugS[WQa ,&f ck85t;|3Jk&E'o]Q|;H_&̼Wb݊orTe+y eK kבf͕mQSjB=a-Z,OT$)RCSB ]lz{nJ7_M3N/>/HԞ!aEq(?* _6d24;[OAR\YCd @VxEptFA|Q6UyCI#T3}!S C#{h!I R@z[PŒP lb*#u9lQsݴ\`jث TuIoUOވ{BJ܀}s"V C¢a_SXe/P(>J>TWTRcgMpA 1 zD##qo*EɴΨbߡ(В-Z 5f cmLfԦmih@)V1;kzSL*)e0,LZ[4VG&,Y^u8VuNnZr_ti|G}L.9ND)@EqRA> $k.ё52#u(@m)>RT吁8 IWbHF*FV\t/qWa۫5fzp134hL]XaҗFtˏhyuKvׇfwgČx*Dl=q*ڈ`2#U1{Li7w׋itoף?vj7Ү͍dF(o /|9~׫_p=t#d=EWG}`0kbOhv׳E.[d2 ~m{;5zk}1vDZ@:yE[>r)YP* AS/6ԯZQTgTg&1L[.M3bAVթ$ X @%bbqJ Υvl>|z.j/lq5z5 u{!6o&pu1o .Y-ixzX߽ @6Ó˴4JAZ3v~}ÝV0N?jQ[W+a=O"WOX8_7m^4!̳`n#7%n@%|׻|6s/!/R±/XB1#m?NIm(轵yqM`.ิ#!փ J3w!]I#LƓdl(Tu)hNZ(2am\eֻhQy,AOɔV0h]\ P*)վ?:Z͑ 峢 928KQz4*sb&b!c.7kr7oW'0K@}}eyp !gYԎ-cBm.xEݹ B夈QqP`1!^[ֹ bόuSxNVV ldBdoRC%oZBBժo  'Dks {Ɖ) f@k{4`o]-8GTzF*z}!ew,6A/LqK[=o}Ufz7˻vח{Vۥ4|xGmRtt svZu5WŰX2:AGaiM5nnx.<مeu`9 Ml >SLd`jgQ(71 d[ru:hbx#ʴ}bb08G®fP}*_–1*բ!R40α)UM9"hR2V>BM |Y娏ӯh + ڨS p咝p,Rv d1mJ`#dI#OyfH!Geu'I٘|YQfPh)jR HS9Z1ۻsM'N0L5FEDQ!-:rd2PMgPM%uX@Zl1hm#""݌,/l"f|Ǥ#caRCKhȖ.YԔD2*)QUHhcnru}Z+Ambq{G4M6[1R',DKEY'$&lB'lMs.v|)%E{mS40l@$X)뤤3XrRzИk=5ͮed`*.bL*:j 6cߪpsPYA|Q!Ur")Y3)B.v#W7e`(zjxG~BCee֔$@24vRM | cꀜ&k)'#"DԪ! 
J[(d"EJ8 `1_UzM#} AFK9K)XMT]էIAa[G,tbU526[*lvBzoƒS>)e3|4/T#(҄5d;h@%vPa:6vN6ƪ"R)+.1Z.X%ֳbѨ46vF66vr5l.t(з mߞ`|p`pe)nSx]1g=]pB`(ٚB55W=G&߳C[O>$kZw(:C"^ C/(R9YAx<j(*D9`l۳NFG{w;^˃LO=[)y@WLX\! W%ۊ']"羚k>(X×ѳFt;, +(2r l4{{=.@3~׼xdPQPjM=ؐsRbF9ꐫs&֬UN*9V5=0\-4RQelRU ۶E.a<R_C:%yglkknFe*/{G*?$vmMɃWU5dHʲO*4f7CQТd\D`n4Q:cTV(bX '^ I88r]R w`gg>xRI,U+g u|"ga K(,k/5"UJ0I^qw.Ӓ}Iۛ䁺.Vsy0'l V`$"aIA)Ȍ' /B_u(7N[lBMr ɕJZ\(+κibXf私g&+u;_)EI=K0/핲ә'騒фx6N[=0K*WWFݹ*U&Oյzr3ŸMYdQ9M g|;;f.8`?_M\'禘Mm?/BM=Q{l4wuc7KŠQ0iN>(_=ٻM N^ ZtM6U(g^rHq0r1H%1>٠!F5k&>T8s9,ǿ]ɫ7}w]LT䇓7î7Ip)#h# $ CV<]]S6Z9z7W9~o(5hoilg-Uo\NoE?igVIC:2 3W7QR x2]R( S¯iܽnv,u%%tU߸Dy#hHA`K{P1[DTi,?J_0y{39$I&m@albWJW:nb0mG""vZ9+H3xyD5= fZ֫1LD0RB4})g +~Ukt1X> &bܝy$"kYT: ~"z~k[6{sgH6yXQ=me[cR;ĬW#B ` ojRz\T p%hE*ۨdITR)f1V7ͳUߗ݆,@Gy( K5Jm(˖I.<{sWozջP\A}Ir"P$N$BZU>oE01QY5^X@I0MDP i "oWUE*L<%P`{I +bi1g;JRH˂5r VKx!K&&Oi ; U [Z);f7SyAS|Qχ3|c_%~nC_5iHH)+I莁Q!!}D8hG<^NgE蝝 8,Ez pJ"_e#U@p4 ܠz~&7!I) {_PY ez%BA,h4c uYVh̃Uv~F:\DֽO0<93[UQJը\{:\8Xڭ m1NzɴNALaF_&rO+k@'>N{=?:V%AͿ/v|pUu 1/mW{)Nc^ÛZqHSbz|%'V7ھ>y{FMXh߬؟yw럫IWEN4ᯙ1X;&6.XCQ}\}wPrݍt$MwLH܂R#F6LP]=Ý17c޺$2Q㲉M/k:zo&v/vm8mM-=Уo<>DLꝦ,J1'HcD+ F,`c+PB){+g= e/HHB!(GlzxIjՖQ} M={#BVg^Aٷ6_up(_Z ^V=aRBɸ px$p UgX{q5ŅxH%ѫ\Fms.Inxp Sf6=I#k! шZYR!DF'#1*O4"!fs[^i9&r0 EέRȰ, Ƹ#aREb-}JDc^ȹ[K)E!x}HQIk`#Ѷݙk x&xP}IH.mC[Іx Lqs8܃Ւ3HB y (p~_3qKZw %1iw) Z̈,wX+ 6 ba]"Rcyp /Ig19ܥ)%v]NnsTZ ˺׷F|u7@>0 .1yQX$4Zݟd*dˍ"b9#MR[)J`TUXOV(wKRu 3*&da/"XVu;%&Qx8}0j-S|̩}Rh)H}LSD ~gi|ש ^V|Qzዤ6 6RU[e]uj[,(W5ɳiy1uC3mil5Uĭ<'㦂ZuW$ґgŜ aIN VYC\sKgsgVْfb[Ln[S:ᩗTyAԊCA2@ EdX%LVKHXW@6!`BL1jveubk-Z#g3iG%6(6p5r4-b?[򭐥SѬPx+Ial4&D$(v4 !qQnAU ;-Q@rRgG/6*f8Rn+帓F%Zfȹ/o8OVYYJKɈ5,hȀFwmK_E `,mM8+}T|H_ )2jZ#yäf͚a_ K4O4 #4\y}J6r c?z.v^At * &R6(KQ8*q):j9\"8- U ,QE_\+n)AnڇYYnn}xwghJ؞==E>w}fdgF}f䳦>#Ե3#>3ό3#>3ό3#;@}fdgF}fdgF=,F1]2\xK̢;)hoMIXkc-<*>P`&)UZR(%/]a-O>l\`ۋFb֚Zkjhk!u|GG-6 }b61<%j* NxK)#{Hx Ǽ(P? ;6_(=Ԓ}~ &,x7l6W{n]7}*&3OmhIb !@TIԂvML88QFFG(uns1gXEE @\43NH2KbY@H\D"X+Pb3+ Sg ˅iĹGq}?sy6d\~Ev І\2;^f)?? b5SϾr#TLkgq65]4dzLߜW'&Q97}v`zN1-?xeٯ_|F0hAWi0Dlr6@/DFBV)*;j 7odzx1Gp< 4Eӄ%N0o~y3("!;ԜsWË5A/ B_.p0;Lͽm\!bqeqxg1ǻҥL|-Y$D#AYrrMvОpL.fP5k̚)n] )>Zwf!Jlfn/C}{O>b,@#*otbptb-5"1hdT@"jje^cA(OL*C4$MQ/,Eў9^pocpeѩ^3_^td18zLRvpu;Rxb }-G#-I*#=gV:> =9#}W:p\ό) Z2&%bW!G"zz_=btұ9'7|=v43 T07c hJY:12cr1Pkd d''7ewޞCi:E}`Yck^~7X˅8VKRk7jr@\Yq,(N:TIV+]pwl>6/sݍMNt7^{0TSfLC_dT@wPǴPQ Sި(!j2>罥BeHՉKYJ{ b3e'dw)E_WOߧwGjE :y"I"yn@{Ǣ 9Nk"! 'k)d'Wyt - 2%"8hH4M U^F2PU)q*HKa4;]I('Hx@w|zݔy %|՟owy8 f&dT-NL^ITGW`hЄlH,Iq2"bB0Ŝ $A9F%E4EFet1yOt$`:&!]R)\B*ُJ1,,b!HY/sSc~md,},,*6v@Fp6(e,1ʜJ#3Se@?Sh*gA OE'Lӄ|,'!;{.2tr=CDF1'&&,n1$)ُ~2+.fWPvtY=S+K!jbƩ_>[J?ED^y=" E.\! qh6F0"2Ւ1\(q@3#WP4k(!J A ҔG-X8FEunГfBsQsh\~yٲb:iɡ( qŝMd z"5sU28# 363%"˵"!/r1q<,*0<<V*nȽO1ڵ~ >q%eD}o\. gȉN&%lJ>jkABPY7ΕCxxGxGڹxG֡xG։xG$I‘!gBOQEMMA Fj)D'Q2##䘍Gx G* ԀUzetGDJr)+x*MKs-.w)5Kζr=p1u!WAS3Ls~u ,{]ѳ-u;$竏W9E|3,V_ LM[ڶ4d۟ ?EK\r-<wn2=!SBi{9>o&8˲R[ hdpiqaB[v֬k6ArA@S^U;@s,t1t8nn$nomrYrPܡeoTVͧ_]& ?lg~AOWaхipX% ) ttBm'6tZЕ⯇T5źևZ[is5aj-?EH(6.{:/f6 Fy,Yo})}Nmi9,J$*  /t[(1qmc+a 0^eBI%T`=o {Ǎ.-QdCɞK b?Eiьl+߯ÚHjyFvujvUWbT^JlQKXJw S"Թ:L )wʣ<5jUI3 w&{|-?W-?j bF*z_WϾ LP?ztX:ZlIҢ@))PHpBWE^` D!lfrh(*|O<(*',?ߍ7xEMe*P iyeў8h"j 91(&֡Gi;?r3)@:c] óEFƣCQx5 Ad"rQȀ\1"tlѝx,N% C(q0ߢ|ZFϺpm,<ׇʵ@-ظ-h}6hDͼ; IQVAXH66lYiT2$f;!) 
v;$E΅ Iq}H cP/!:l|`$Pz(AD>:4XL~϶IFm!zlx$*6Jcd=;vs,ǿ:9_Yyj'EpRh!-$SȶZ1"Fc@QhR޴k؜ZRFl B‚NB`=V3$T @VX_tJ4Y4V [$ \JDd%KATѡ$%RNS9u+߻Aw REgCB)b @Z W@&|='(XR/H CB!!7#mPYrT6,5ѡ.yJ1O0MfuL= ӓQvUqO16Ej@x4$WwuX0}[:8&eYwN9w$=dZ" 4; \F22 y6J" uNc(`DFE!wo_#\VbHQnzAM> >xpjm ןIn?{sU_g Ri͉()~ ókΪy7--5UKpd+PlV@[YMRm R96 !5W]Ĉh}:6m|ȥQ_O=_X{(GYwl׃+1>3uu֖wRu_|ķgPGTRŗ|qW.7`sN z8D%w<;WR]PIu%@F'}42eHsMU!TݦE@dk"CHOăKs 1z @RQɺEׁaU<:쌜 7gyvCN/Z_܊Г2 m-uӿpL_^ߠe=.gtTbiq|޼Y|svS"ג6$T {7c2_*} n߿nfw9E+f!le3ͻ]7|番7Jp:oqw};oq,;:΁gG kӷ^ed/nw{ܦf=tkڞԭi&Y+vU˭ͷc[ጂmɇi?&FjNFaL h-@jN|ҁo jcjS^h˺O.[L5A3Dhz^ @WTlKEѳ=ϋ9V UҦ`M\$dLO:3k((S.f Eg-O (8Meap!;;#gC4̧O;c3Z9y?R[0X`ݠғ 2&Yi%!#S6Dp6وXcj"Q$rXȽ':bPM9RYNy2,HZ>Q,#Qb('jTB9w uOZV s+r|V tgOٺ>k-%[i^YZ u3]dj Z,m׹N Vi_>TbQ,-#^?_mM^ ӷ照bȗ~ ebWEW> 0Q[墥 ͔Q/ k5r&m/g&ዿ6ІkFZny궳x%߳I^Na|2V7̵{2> :9p8^ P6?sv~:ΔK`E|o,7ϖ_:̍T~> kv9Og`iԳQW\4E]Yv]]1aAu9+BlU%sQWZꊩdUԕXuF]Ur<uZgw]]1htu65Jn]]ݏ`/Cex]uu?j4;=ԕC?T ge6ۿ&_Oj,^ ߯+.Җ,>6tj !mOҦ6A4 }< Oi@ܚbrT1'PDM]ʇnݺ]lX(PVUT&I%vڛQ%R6ѧr||&yQWD?}ӺX'wX{0V_T *ލw_T}"cUWaGvΝn43^~aBRdh4!]F[XgPq=/Ox`u/(G\ew݃h&T ]BIq0{Մީ IoAQl0I:Ԧ}i]XëQ,<*ơf^HZ&]4AH~ei֝ 1DdNAIdt>x터,EʢMp9\2 s,Vs_`聆$,#țb"XpBR|QIrJBMZwʸ}B۵2Xd`2i%,Z>g hK&/_':66Fƒ-AvPWR>fhOOq[G`3ܢRnM&ucams bCT(3?{ȍlJc>G!`pL7& s G;H8 [i[``f"U ݡvOwbd%LN뇻<^KrnY0V-ZȯE% T8buJ8=R3\PQRE 2$.p $I[UI$% 5Eʙ_5N#g;B{hhmƆ;kۣn:~|fyrlFt$S'>VdKwp|{Kus[HAH[ 8yZbk/ӳzy492;PBKJ"B8K{E锨G#,Jӧ)n9^@LI4[Y?1}2u۪f= cDZ, ɛ6|,|/XD:R NXm A ,p%'8c{Hx Ǽ(PYb6n~^R?FC9=i락,ق$sG{gurbCcSI$1S 6q0AcK*="<"|=bs1gXEE @\4f d 8ŲTMj>p;wAǻygaL>ۮInn,.In=๥ddEȉ L{^FT6: 1VI TA7GNI1r<{WZƎ>v~GF!:M"HB4&E 鵾a=RFYR5:{WQ8d2MO"E-.  BK%@<+pWfxO{Xp`mkh%DnE \g()M>eQ>6Iqkj<`*¹*NJU.2*&{ӠJ0p8l~"Ud W ՚*5;4 o>u`4kۺ$*0|Hn%T'9з?^֬VOoguU~.KkkeD:6ٚ*93>6d8^(W݄YZuvͼ{Y$jJ7Ї[:OxL3gpCv9Xo4? :9ZVTqQ{|^ZUo#$w6ZKvT.MlyܨAu߽ ɸjEvhC}l̎7]«,?àz?LM\<ҍ\NP49[ui>|z8&ClIrj#.㧣L/)v˛l]'tU[A$>()=`크RTv0'dv8# 8sr#4E5?6d1 Q~ϰ])^CF)8 aPs}NǨ>\r/ϱ0µ?~i]ǿ o*4UTggX~]i sZKob-;_Ѡl֌۟fr-”_Zo3%?<+*::}/o7m@Fَ#\:zvyq[otrZlSs T.p#/ESs1V_7ᛎ8.qY'nؼzbu/VÇ !l4vo+Suzv26Hԭ>^;Ϸyõt96$uX$D#Ar頮}YsKeX&My3TabMy5u>nxr{`S'++kYb]--=ч'=&Hr /1"hұKA'f G#R(XFFE $*VEN{$*^h!O^w6e&\:A"3ǽ mLNYL1:Ճ^#0zdl+66vv3R=V=p &ݍ` }p Lҭ_X7[,_n,wg>z÷U*GEulv/0|@\UW4s"< 4^d;R,R>\; B:cj!=W{%04s\SQ2o@hJR;]$Nd!4^4tYcwCi GcvP*Ohؼy "$hF`_-&NK^χG!& 5/ٚ9BceqFj}4\VOKùur썹'2 UD<;RoZ[N! 8^{\(gX{rԓ%G(瑻't48S@ d6DMKB˷"z_/R8Yr]IQͷ*8_,|؜f!ϷqA#9Z,6zAV)B#FRfQ.2Z*dBiȉ6[ 1h4FÚͩ"Yv bo{iNjQv6ۇŖXQٮkm0l#JF,ӱ Ijgi 4޻`fRH `1m=W-RRZKCiCBE%O idbo< !V'S.yg+*Lsǀ!{VnS(& H*T*Vtk"*8"FQuШJg.&v1_hP.޳q$W`l[CG̳D*|V߯zfЃE,z*{~ȸm~ :]$=lSMo^߿y{%MR8ʄ̱.[ZLhY[N<9IR*Ayڊ&)A,e}K""[zmFq5mvYvQ2mZ=*RpK(1IL8lݤuэⓈٮ87/[uWc)3үvɚֵXFsm]tl]#ٺ6}[mKJx+G_]$+(0A) CijU[#ǜiLGpGm`*!?U p&q.۲5p֌izy8rC6MK1sSUi4u}t9?5HԁAΨII4' $rk.U+fRW]loQVIY[)ݸ:j3o?y[ Ue|_*dh1Je1ϰb f]ejvbe-қf@+T@N:ϭc| 1D@%Z(UvC:(BT._L$ pT=o A6KQ?`97B8kCCm//`3?yS\k Q- 3n|uT.Cq:.`%n4(HI-Y-xK9ƈ9eGQ@KZΎxrV4jtp$3*@ʋZi*9™""5"ENYfEhIALc]EJ0ykb\6IlIm-S:Q5Q+;=ӓfĻ׹!RpDL"WOE<(!" :aL"rFxr+6ZÉm\dhE%,"rDì2T U4qj׈hso΁z&#!ÌN%gK2RnA1]UQIn},‚r2J$Zh!mKR< !N{ Z{I[ԙh)#q́ۧVٯ-bIȢ.*ۥƋm)$ţߣ&vSI{zu_ip4Ҡ!x]&8fC ^OD\"? w]3 ʼ"V:Ѡ)N8 uRx)1I9(—:Bj6q tE*LkmU^N)E%)һ]k;.:F4)wf<:?ր_CM(ѕOz620>~Hmg0. A`KSc.|ʍ29.]aTsp0}SY],xon⺺ <\E;ʖQN$Au er/VShp4F6xqlMElSNJUq*VҬoM/:>WjB|E'xNziN{=1\KԳ \Fŵ5y6ᯕOɢpvA-ze삁$3Gן-9_Bґ{eðHtETyd?x`^|WӅ]ƫ&QY=tF]:We:0aJ!y`*Q0!JECZ t7Xթ%tE q}Y_.B#w髓_t7>ջS_ON߽ Op`8\ _nϓ0hkho:47bun|qmS^2-Aƭ^qy~|wh#_̺k׏E~`#_j ݀JHT"V,Ġ,`򠈆k+uă$3ד8:(mBAX $xxb2 %=*J+R%c'}mÔ`m#jA 8BS9zRFKhd4z& {Vuv*5F"76y擞\:ם'qDIbZw٢2.=딴몴B`[~sפ2*>I:t8.a9v4M7&Y+6+ֆ\;Ǎ4ᣱsx@ Qy "$TTK岇ސ\((w3GX(1X1tEǁ_]> ?#PIEqk\7Oe[_saˆp`.ݒg_tM_L[n$W[iv "ͅB-bF-8F+iHF/z"[QeoCf_X:)V]'t=Nz"ISBl_?J>?[BD1/y62KAtcҙc㉴mbK,I_.8퉗罥BepDPܡr/8[g͋7-44?^U%'^, W[2MwmIkpVtI. 
8UY~l{nN\>FŗӜ |ƶtYZu"7,}%[urGt:ksYu-KlY6غ~Vwh^煖a8ov|hW?ࢎ(LvugnOwwئf5ڎϴҮjLsN%j{u0: pӼt*B c#Xm\P_5yQF& ~F6`e$Oxgl_q}yr+h9 Tt Hdp,Rx+:Ԏǩ/UǵR|cdU5 *އЦumyy^ٲ8㻹e74n>mEet.Ώ7OzӍ-.x2j{'o1U'?l j0@F?oRQ+X,ꮳ ®2:B(nvٰ+|UBp5®2lUӎ]=CvJ.]epvUlUR]=CveRB *+پ yvcWϑ]Y1@DW~D: x0<|rWR!kPTNm^߿~[ģ㢾Z!fGkR4N֗B+]Kt4ly usWSP<:CN9DN&0/GY$0Wwm͎?D!im$>`LnNGƣ [ZBU|Z˵ WvZgϬ|ᆬY! ZZo'ŋl\j&M}l_ bWUW5X w0~9[v"Vڔ"3k gJ푬3EFV/Bndۑ[o!T'gWQa'V-OîŮؕզ[O ; WU}aW&bUrg;v|36? b 2®2v+RpұgȮ8Q|)j1KnkPlҷ' 1U<, zad Xcv`\797 c!RB͘s!m2R*t/F,#0d뇈 uDMA05 [q)R5q |to޿yb=CK?{WƑ _Kr@MBWwE&$% ߯zHJ(iQErÚꧪ\v2qr7 ld'Ǩ;TdȂVg:Ҁ1Fi!{Դ@lj\7m֦AMP vZKexBBc}1oƁvy' [mө;RW50en<4Cus3hEW-!QEEPJgYrL*a2F@D32Tg3!|vQ:* 7U@1BG s݉)lsSS~}X+?NtV2fT47&{AqYNN(,SEn\^MS2 7 ߊa"1 "Y'pt 9 KSDC%(d .TVei"3K<͊ |/I9 "H[ZaRd2]!ʆ81$-s]-<9!xGr:U@qS_7;l|/lx%udւNL^d!$y2,&hfcx&eMO-fy$2 @c"cs G˽Bd,{ usAj|\k[*Ge޿S3g/l9^k!gaIN`ʥ`1gRb19ƳaCz^;uZtp(`1j{vCx\uj_J;fތ?݌Lsʴg3mwUxX&1Iko֡_ڴ2)l}kTvU}PŊ%tq]L'e[)k &j8LJ2pM4UlQ$A5NR9*ynW d1Sr׾7x5 âlԇئ+Qɗ"O烷4=_Q|~.&IH߇QG gc_Ր/i~L(=Y73tץl|y~;7Mջ?Űxq?"J/iKs.\ .p6/(97Фo '`0]NƉOƅWr[ڋ,KNg$NF8db"q0N~#R(dz<w?_-9훟U;.qo??^bpqu5LKh(R @rM,XKIe?լ?q sy;wW|C}iWb}ƆѼB~qvQ?~h/8 ޾[V'\& R`I6F(Kx{szܫqyNm bvp- Lh?o|_ |#^Hio-m?8퐄#OK;8-+Bʎ.x].Vf`Ketkwn]ڇϝ\Y_8L'B({ظ{:L,Ф]MjRs5i|]_r;fNp"\M*#}7Ϯfwze W'YU#uګ]n.Fցr |>ْg}'ϝᘍ@FYz$'lsۇfHK.F95%rYx'-z]$ؔur4yU>RS%>?1fE*PsbpÁ1ˤF3IDЄdaDAI)#e:Lo2c~qq5}tݟ67Blfڽ!Jet2ݟSWI>(9իT`R_$z[''?J;`0+ÒO: |ħrgru=VTl[U_[fbu]y $ 0ȣ#'7O:u˔ 0i%Icp<~ؒ|3GMѻ0/8mPj^YׁfY񹮌* KewMP^4Xcvݮ='(Qq0Ht$l},^ggfSNAQvGrԴ1mm",]2w")HǸB[MV gYeC(;3v3D.KRY8% @J&ByNF({5qnl2ؕ RdH$'WB{vJ._\zW)ȟԴ*YpWPzJ Q(U90N_r/eUކH r3,5%碎6AvT >TX唔%ZCἵJ@Pu6ɰ@MDC2Y9e}"RC{Eb_Ҝ6ɂ!ZI7`3tRkBN;tֻ"ÿ*Drh[rNiQmĴkϴXOQr;rw|Q#<^(06!7MқHNIi{|i<W}a,[Ofgl͛@ͮFa7pN6*0h hXK2P=]4=oz³~_9Ir"0z bYIF:͙4Z+4@UKj1!J05lϊQJG"s1  5KY@ 8N$Lȴmy-L436(4IF+F'AOqr~@`T6r@NkLV#NǠ$d#Z%r!+I5}J,teO A,5MXҡ|zNiDG1_b*g _S"KZbVnS|J{U1AJ4!<@nd̉P(Ԙ,6tR#$K}SpN'QzD$e o]I`%*D 3!eDLBՅ}*|; 0/[ti15jqO!(~]^),O A<yf%UDf {d6d.c$z38ޠF =. !(\dz TcMe & x6,rWͱW_߶Hm(wgBSt~h*\l~KiFr 6J1ЀQZ9pޱ|?nŘמּPsQB^&Md01֢MdRsc03[5sG xYZ%UK$ST1;n@ 2&n]Frm)\NC'ud]Qc~!c^Y˵<:z(g~\~G f$1Cl@΄WܦeC>&a,-uAZ,)V e&d"J+WHvuMMNp(({gd8;ݞKcrQ WX|őo!'Fs3! f-B e䤾N]ġYM4t<5rBWWTO'ӊ_K="^OHd_O'\OrrQ۝z9>:3:1hQp.ಷ&0"'~t'~t'~[ BYG,s˭QY$u<L2 WNYĀhtR +\\CeQ.q5qd?4٧TDʽf+"B=qhhIJ Z"YRrechiȤZynHZ*aGT@Jb9 $7kYJ^4C[=|\rI4~z}N];owy/9w/:ȕ?jzִ{пQB3S1c,#J%RF ̨ k}U-q6X-eKDaDE){ Ur&w Fε TmXM-c=RVBQ^{ʽ- 2إYlwI1<+Ã< h8_|;rTyRIDpRr"?SY9hRK(33ҁ<eY0g/$AN.BJd6 e\>Y%s> Qyr1h]ybd6\SfǩXm[)Ya3࠵E_b.ti惑4ύf9dFf@ TY"8yȄ "/:Y$dTrp$EHY'ɩFj'0Vq{:kD1'ck͏C-leo{9+CbeRo(* "ZCx(h, 5%U9$+C충 A 71Q,[, =0/C,QlP2򗙕E1H*MyԂg|P`B@$̈́NSr)5bml|w ⨜ W|qɮzQ֬E[㵉,!RO$&p^̵afKɲȤgTbIjp z!bSamܱ>T5C>| h gy7Qs͎*bZ$6Y{?H.N:#%ǃ+?9ܩMɧTi-RHJ4+}F\h9HR8I8Dh3!RTQSC%FjSj%93^:9f#59@}2@ Xg\Fg}NH4_QN8e%#OuuE^nRR# 6\6WoL5w ~K Ӝ}98wouX eګ{W#za.qIhc־=S yaw3[³mg]c^QL]9pu"7l/Kl]n]N[o7E͐g[r޾M{e;آ畖!_f,|mx~DK9geyRW޺BѷMoaS=y)eo:k b[9֎_z+Ҽ(a[6j@~лHl)!f̮C c%8\rfnAHъ@"{BO5hY u1om6B RqIOc)`U'8i!. є#ET"Ἂ+@ !X d3o1Z=cɠ)Sj㉝ʧ\dQ:[yD,"Y_́piahsjKQ|IRsmly JJs$%\Ѐ'fBRށgkDR 1jI_[hl+,NPЍ9IT6>P\T#}0Oftr.Y#]j筼UErU1F9Vƚ$c@$K>chĜ/ZHS #G9t+c !|!k?t]}묭 j݁{r*zwtIѹpߚQC`edQvig"yx jA+0Ӝ "u1!FbTC}p%cJPh$):E>n94ڴ?yfY.& I@`E2^G %P!yGfDvYkcjؽar۾pU10i-q:iCCPp(QeԀ־qwǩc'uf_lJ@1%E %t>ec6$)K w2/KekOPe9Mw‘`NIǡl> QPB+H .)"H3X &&FA``!iJQEI-"e\Czs r$Ĉ3VY!!/~a׀ԇPi1C ?t4XLzSCH0%Oi.|ʍ8.}L\w6̟T^+\%uuy8 QS Xe7BRp erD+)n43߽8Q BזgFp";,B; w?mfr5q rNH'Wz'Ǚ7S>.f痾 WQ=Ud6K8W颣]WJ!d +mޯ%WDŋ SB`x4*cqfi^VnߧO^WOƧ^g׉Ԓ,^vNN3rfW$h_'YN^bj{&Y7WkG #aVYa^Q8yNu/=Z97kGed\7wUFŨ\^>?!w=JE?O]bU *:"tȎy˻W/^~{~7_?2s/߽~_ppc8\K7 ߋz96CCs#Zֳ f\[+QCV;k%@,~|}ދ/;iLr~z`rϜi9 z (gZΑ0뢢+T|cAU8 1hAf;*Gty-q'qVGMhh ¢!OLFsGB˟zETRXdlqҷv8U^ ]l ? 
Q-NrD>Hh5MLEqZ:: _O|''w|<˗^\6ԝ ˲ѳdO3aX?yK&v滫=`<;xvwS~^}"~Ce܏\-7/GKnz)Q.*A$YRxU+96K,u98m7A oWiB|vΚ^NYvL'y1-bo-rtL-x;j.֝͞tz 5Z dMPx1Ơ;m¨]rcqhlA`~ki홆=$vIwN>j"r<tS 6D ޗ+'Z2DNݐcY9JwϤZ8OI;seqq:h.'/2-xbꀑ@sJ>K,yЂῂ6*b"AdHO58_ #н(7!EF| r' I7V"Scls*ptX\9Unk?A[\8Xs_r̬{`r'XH#ϻ^Ē ZAӅZnF+V" @Ed& ˯Ѳ:Nrps_ gUi]LeRΥ;FFZ|UWkeV֭Vfk+FWѵٶ֞,-C2WU  mwK?(~~OPF^uōR >{na 2LG$jXZk$@*-m?bzD 6?uɕⱨL&MWWJC[u+K+$X|U&WǢ27]]e*WooVWl;т};,~ɕt>4^H nd{u5m۪]+~D V\<uLMWWJkZu+Fa˥%z{E ꘪӳsH&C wS4Z8_B&VD_*h1QB !OzxLvTL`˰Yua>n6pӲ\`OfQ-2A(Z K\]s"ˣ?) -]1z0)@/TɅXj5j pUcl3Z`[;,t ;0*9.>Ό fV͊޿(ΞCc!s[g2w߱ѓT/{@/yom+L<;c٭Xpΰ5~R_ 3|n^Cu{Q/rGٳ h ϝO;CY"fƨQ?{Ƒ ᗜWx Tyzi *c$l<&A&8^a.-,BCO>׫5^gͭ6V99~Ysc0;"/:xщ\q4^tVCu^ )ޅ̓ OKP{Ŭ.Ym#*KU F zqz{խg^`R #3NNfL2Aݷw7FXGث`7jX?`ۜ(ZXB+sZkAuz#+!V:!*mshc*lл-X۱֮Mم%Ḹ1V+s%mf`w:1Y:W8+m}dWxw.xK#|vOIo@axrTO= !]_or 8tosyD&T W0ՈWNx=7xKdUs-_RL$r" )8߯56==^QmH)"sXƧgWΛ١mнJ,>DbDzJ|Jȝ5B0vUsr{3vFJ.dY0&t%Fxʯ$r>u_~u+E T.,ѨDǢTUw/R])ACEM"뚵]$%\, ~zk]xהz}$8i1-`NBK .cjMO50>'0|4I.'8]hۯ񷳙$u(0c&}J6vX P1OZf"5D=ȭ" YN{ 8H0y,vZV ڋ_4@C@5bGkRW墎Ƨ\Yd_Ls4X,Q` e +gI1ӨNvhMxrm3ۭy[}3͍F"rE `o\`sKprN<Q}&UHŦl!YK5\!h9ԋ V߻Ԋm|e`qܺNnȶV\Ke 'yZ޶AܬGUm%uW)M`G{nobsl(jrkˁrAGXFX-7:ZXӕi39_ `ha)1Hu7As߁=/8R0{B: `MpDDS86Pa)<3 c5"k*c"!@ 98GI4 $Q6r6#Љ_EÿVP9An6w߿1̈́|saכ=0#HFc2@jR pb8&J+a^e O"L9RGSza.fj5Ϳ}xu3˧Տ9̠jL dpAQ#[xQFRFE-9ۏlw9ꀜmuζ%KBɅ.OvGǤi#j x܆H&`F:2(:MDwZ D佖豉hj4BZ"2;lZ}H~NǷ}PnowTPh(h  MI16R"BAzku6\s3'20ia s!lE(J1,,hT.wbh.r6 jEqaC3N{{_i*q<4O—Ĭ>4-kCTнa;i6]3;ף~*IuiڣuI܁'sj}چ'%c4v˞O{`Zysb av1bk 12+4cD8 $0su2dՌB}JoB) ~bGmAW193Tn͚͘1Vɦ qƮԅntcš4MB-YE2NN_P`t;?klJ=쉢 Lb-qYČ]hD*K$gbYhdg<'@J6S1yISt$L-92kllv4 Pv68M:!ص8%ҐΡsŖsKjg)=9|+(Il#56fՇ iU3b^A9PI9t|`#Nudև٬K~\F5"4bמ#0A" ȷ(7WbbIh:B_yOzio8 ĤgN x҄Ij$f.vϝ-\Xj zqR̄bٸdW3EN/x<"Q#Â;"J''gp 2 !tz1lܱ>M> ZǿN᪴G ~ٍ*L" K~dmAٳ碑(O-} Q:G OL M%Ou5/%PQ});)RIk&w㩖t؝ sD)l#N)!р$IBsl0eNP+ {Rb&y$8FҎY | \KrK熽2K֦TB5N%}WI7_lٴ_ތڇ(O@Ӑx * ,b͞0:#q5yH5pYTV([8 sRK D̩H%d́)np/Y)X*#R"%)AYYSY3K`@;+1`LSl:mnO?1xT .뉍ß7`+.SVJU/ŒfwSx ~-T\^xs3} ðFwo|qYYl#@6|֯v?2Fu$Wt6 iaЇQ[L|4|ݘtl'6j\uba.!i`إGL@{^U%-V>T#xaauח?|?.~w1Qo߀480 '$I^}fCSŶZo39qT[:}W[R({ԟuXz͖;Y i }Ͽd_U!=KTqp]R(@wi!gBZiK;CI\KyyGo.nRw=p q.`9VL׃!;kH)EВ©1ۧQ{nKYpԤhpm[&w3, ڌz^>kyҚSLdحAw|Tuj~ nt3B7%{a؛}ނD'%zlۖZ߇a |I!l܁{HnoMM-\yU&2_fjC]h֔v JɞR\^ik#5Zڂ- m|,R?+#G}8j%#:^X^PF:еܟ@WvH+'["-|ҏn>#[8W&ZШctT&LxX^Fߏ[^Py}t ci4Ʒ{]]s+,?g$|7*V%Im⛺Iv|]*|4d(!).W6CE%̓e= 8G: )(ǃYw6+ "2v:zfP[=IR&}YfY=k#76+"jK"뙝/uE/Epftz3y4eE-Rwix7a[r|[^\zv}og='2iu쮦M[#>6Jr c(T15r#asn`E-l>x1Y{\s +P.1JjҤ,WiLCbNf.e:aSiw}nd l)p#EdFx%[&f"gZFﭙ%zOy:`@ M$A>8(E(Zv̹߉TO=g.ޱS<>^+A5ttY-5 fOu_02"ӛFn®|UVsovO`,R`A(8 e1b'ByëW 7O~EKx ^}/ߌ~+S6s=b-0&k۬VpN~Q[f$ʧFZj?\Aןk0kpP %G )2#]C7z%ߋAat:647w z͕j,ނW3fۧbѿXj"j{lQfcbze*A!ix_?E= ̕'P~TvBĴi %S-`ڱ`eO;"!(hYDRSNYK1`he|Qx`-ɇX8d|9( 1 - * bmkѵf:CPB 74Zw\3|)˓nz\[Pů"^+!DhÒD MHށ7!r`bYs[!yx-pETJrs1HXZ\){f&Z"K`geO"EܪRIn]tEJ ;UfZP2d~֚9[1/oc{0QpetNBIkYDAQK}"P3Cd3{V e7z8J ʋ\IbZQrY!Q(?8'd .j-Պa:tP e9zD -S>D"1伲^g3lHx089Lקv;"K"=rCͳt$DcA ORZQAdYZ! 
Un)_/zv(94ۦ2j4.Uɡ {0.[V,m f%|׹G{q]<hm/=IŬ,ןȤ&U(3 R9,Hj6h=eeZya C2JEe*ki "jdL8%)Kv*ooI=N{].1(D:Y.&AGk"O,'Rȭ橔;ED0yJIp%&NNYy *Eړ<5sS'7& ~:LmP֑InbL-*terV"ĂYs";}8rU֤ $(e6kf<"J^C)2HEBUL޻qmm+ȼ""bDA=%01+,dﳣ5g8t [i~i!YʙC%EDu1 @N1 ӌA@ccS\hfv#ڋqJ-lJ6͠n;ˣ7 g!..j-K^euP*GZqtfN[Aj^!WR*,Qh;g%|zShhg1J]̀1$Rq%|:gU*ǸѲzʜ:v.q@b=5J:yT1\n\?fV5XTqR=Ί\^͎/a0 mClYuѫP1;nQ;eLBig ǐ4)FyͷmzQ: E9#?YInFq2_ۼuze&mZyͬEbRʘQ4 CmjPR+^wa/C#R dE)d8Z +Ɛ҃%(dTώB ];{֐+ղM=Q&oA<>-ݭ9VPbM"Z5s8+ZZ\Z$r&' INx+B ZYx <}BEy$@{ e9O<w0)l,ceݬ*ju0/7tasv#_GO 8vl #D@g&`Pi!*bg<5?@rYok|vġ$CUIQn7 #[ʗŁFwk =S.t) ]0v DDzC Yl%HuAhmVXnj[!(A|wk3-lOG" $~!/7.?oֵpu$([^^/餩9^N&T9)w NFEAބ@}om'}U':pW0/}׷,~h7?J2{?&ŏғA?W?_?#G1I] 49Aa8o|ɚf'IÇHcZR-h]z`:Nf('Iw$4yd\{# 7(~y3{d/Y:8O8gb1$p2]Lnّp$w+p]2XӯR_a)QםGZCGG9qו?nCC*W;S@e+R=AywSlwb ROSEb4lS 3TMhj8ީhvO7N`IbPp&rTD\kiu*׾3jsަcgibgu ]|iwcl#]vOv=iFn8޼ls{U!\XAKkU0! g%osJ!eNY\%Nw lfjG=]V75L"'A^\P?q2IS z2c>Jc`.ȖƒUOnP*, _^\O+_ o_Th4΢iC8K&'[AvR y:|+r!H<}* r˕zZ=C`\O?eo' [%pHI,oTɈrnN͇*_LZhRp<;<|3h1dvU΄0ai۾MȪ.T o2`TbV^9QIl}-qdzwͩ5 w]vCrԴ1 (_6^ wZh`dAX,(3bѿ!3GB$)*xmRI(Z<'Ga]=\ENٻFn$ .mQdCvM$N20%Œwba!Yg"Ub=UdO*J՜yeGfgeP^%ie8}D2`9CD 1bh#1M9RIM4&gfy*o_Y,5q dUQ3GMDgDyHf0Lm9pƛx U3M,XJ`km щ@L6~U2-;+^L-i*]{ŒnD<\鐻c3k3csF -w_ʢ`4)>q4*8rՕ0FVb~:܎'pvr<`-YM:s\HSEŜI=%g OX;qQq[ew)|W /+v_lh2}cG+]X`}^RFQͪB0YU] [myͪB-׬*T.w5L*Ɍq$c5m-9U-ǽK0AԱLџow9cEZMGiUq^119d0`{# _t}wWZ.JE'ޠRY)H\+x!sOUwP WoP\i*qUE/Pܺ`wqET\W WGR*DdP)3(vU!hqUP WoQ\Y͗3·嵋:zv7rYy|&3l)s*3|'<2+Tk**> ÎW84ܢfl_&we܊ɞ&P!"n~~~OHr8 zKv3Fnvi =]QAy8-a*yW]S= qٺ}sT`.Bs8x֏t uZ_] T0eS;/7D4j爛XnqE>X`~ 3cj0`, >w~5< W1W׿l×8Xr'4Ư&lѹ OqH ׶R) 4!*fe4Әy.*=S"Ths~$.*'?V4K[M&"y @fW@R^&!UN\Vd*$ 8;\ϖTi"K]]i"FGrҕO(/N/WȚXH2Iʸ$4iVJdB)&Xi-j^3:cgim2ۦ>O7r(ojYt||4-TrD$P= 4ߟԳ.W+H˓(5}{f4kݛ6/5vSsW}b[v OϦ++.9Hh ~*|p K{U#Ir$PtjzeWU>42Vb|7c/uӸrT֏:QWUTQ%fsHX.Ap5&{|yo8MyG՝ZtiO6?>茶~÷'O ߟ|[:@p% zg|57y4|hi`Y|qmS^1=!P,4k@R~r緃]<=_^O=aˍ$_@W׃؋f]TU\L[P.b-B !`KXu']dtK3V6yp9ot}͜77 Aӯ>qtdv 7&tk[s>yMT۬=z&lFS\-4 /L:ֹ_ȕhV\̊%ڧOA7qRKMgJ{9_!K k`mnjnIŢI(uFb%qȌxx263zabu.{ >Q~}FѽJ7ͧ?vg@ <ݎP"H4FIJg.`,L&7Hdd-ҿzeǶ[j'K[y'h7!aPX7 A* ^#1{\0/eJB `֐͖䙋KC!˥KȲ !ӄ0b0,UOǫJ,P5k(]ʋFBYO|τxpBrؘ /KF&Mt{-edŭp&8l`"5ICP<+Y,ȤȼRҳvV,Dɣ̒20N_rȍ8&@cT3s:u^m:U!XU"J^g7DȐf唕JdgmbBYӓH?N?_BR&fo1> "AIE#&:H^3#{WUBM+Mc& Bub0ky_{9)r+_蝎f˲h~¶F,AEola  ^ >B_$wV?=2ES4A:$#O 9{Jv嶏HFjt@`Y`>+nF)]z#E4BgՊHt#ϭ4';g/]~6m-^;mv=y"裍f`Їb:NʶUfU8Qiߜ[s^ 霔rQ&,Q`ЋŹ-XϞ-=@HFY J51ϓ &霹VYSq| gcBm{D8 oE}ҍiBNmf98~,zMm7аozB7'"6p|s1ѭ .. 6'br pYo#7?*&f ̢ uܢ:9'T8ց!]twV1HrdPYMgoY/@ %dV1syWFcsL'WD;w\=?[loC8#BsX+uwR5چH{XG~qhδ[U3%|fϠ~\H-בG!S)qI{]e\(h`&'47Nc< [ztV$q#Q_hz{8ԛN͜2H6&$ɂg\4QN#EE?g,={).Zhxr* $,pK JPX*p4Qٔr,}b/I9 "Hk0JtAQm:w z wN7)?!}kt[ lQi]`MMoG}T'0FZGaڠFG2BPT^K+Cv6am< O3/`oz4#+0tBB96g4{%!ǘfR7R;EmZT<C?_e8Nr<{7PC+>KF ~`!x&XtbY!"φDZ_M돐󈔾U֛b Խ@QnOpݐjݠۧ]ܗn`G5S’ʼdhiHL$0dʬ1 އ1`| ;BPI[״>\r{(1@p)X)Xݐ;ј^~`L5ơ_~_? 
,Npu"U7~.Sѝͦi+k_^ap1^֠z=#Ȁ(xPx[}vs_&^M',>aY6!.[weWs6R|~.&`;C^T.0k?/ɢLt'8ieL?pb޽sP g[bQR6D lAr6 'BAFC=0M'HL'jR\6sx;'Yf:O8+W8dbY D9;ܞiJ?_ܙoqM?RҶI4| b{~;ùD /y6 jB@`gh Lpy+]ɟr sy%\\SV|=b4^G\S1–D?n6.w+~ۙ`'?|ǟ,vkRYv̌K)W nGIϮ>oA%1]2\qŤNm~c1;iH`:O׾Mɿ^c_[Ҍ ҈d yGNl>TTλj_k Tv>QW?ovⴰ(18mp-5d.T-xUR(0;`յr!]t6+ V~`߮Rt A1xEW:&Q(iʕ#٭򔋁IyOMy$r#z#+N&WՊޘnҭo(ҿK{j7k=Sg*£UϘl@myjx"j Շov|>6Z/s iA'rF)4(NBkK:UGYN>$%(L5Y{nrb" &BZ 'm=7Ka-u]"k#Sg>1*ACRĨbv&\I@p26;/}M1-\B8 {]}8M<ûz=ss#->\+?OO%e$1C(lJ H^czp%>a,Gey( bG@f.( I~z<U\^/[\}{ W=6Y۬!Q1(Hk b4;(D$\UƊT׭(o^H@M4lY>񶶾jc?boD9m!ȭQYLʠQ!}>lJY8>ce!xTz!a*jӹ#N}L`a>Z?_G^d\6cet(e:ER)%ULLzsSe7Ǥ^cR r4wjMgU}`ilzcXrQ(}Ari6m3%]1ow} 䲭cJ(6royE{6j-Aυ U!iЌs>D8;SX~RpZ$rRрZFlF!0CID} ULrp)FZA&#U2V~XT$㡶PIfo ^GyY(x˲`츮~ŧE*6?Հh|;B!AQ8)NQ, ̢u-va:k!Xjq,V[VG;qJiYAkr|yM߲H$97:0Cfd`BU{\´M*GQtɒĢ!!H6>Pj?Y:Ylt6_oDx48|<"Q;S$ #J~{d4W(Dn&C@WLKqDwF_9ϻrT^pZ}t+3@Cau)eۻD}1༙(k̳jя/Y90WxHz?GĻj4LK|zϿML~Ł; dYsx[O RBĀt|y?a^ޚ*?DinnWs8M^RM> CtWMjn%-l-/Ԍmˣ~2',1uk ^z[gpNa+ɹpF}Ʋ,~YrZ=h` 8,zɒ%?K&7+Bm3(mUu1ץl1~.Z8?y 9y(䮹J;~P'J)gx@rGqoU!}QWh?(-T 3TWҀ}(QWDfb_UxUR^]=CuV #uUc ]j캺*TBՕ6>`voU!W*>6t}"*^]=CuUzH}]˙ܟ BW;QKv]]*{cY+Q$k&X]%]l>(ao^_hSWOy,=`MmZjPykeÃƣ4>Cnw>W|hF.Fe@A@p8^4f_j8dtK S,rmm›Hfm QzJQM6k4#^eF)DJU$A'-+Z -'' J 7]Ļ5rhE:Cݗn92id|CB耀(~'}ɹ0Ch*El(ǘ lS<|r.AX<)YT,VD -,[|ѥ԰Y^r[܅ǣK Hp/gG"xM(: O/7J]$fB* ]rXv4e'FN`iJb,D 9Af2m(`Ί5rFlq~!Kۈ@&sưd93dd q)0ڂ 0`$"Ц\Qoxk 1H$p!,} Zs`=j jyo'vRx\9O!DR"0t^EiGa6E83"r֎☤eW];nչGyD,"ozD#W< Kfh ] i2֢GhӋLoXk·M YdIZmԞYmKw`Iz`䉂M0{Ulk?k)OjQw=PL?'kڊ/)>yKtM9ߨdbL*4t஋t;Ec=nP#6J;H*S2V8 B68F@mvX6 ֑5֓":$T-խqzq9:9.U6ڠudgJY"zCHqdʅTʥw%lmh7)QkI&(P[C q ! d-'eF୑emKJw$%&E6e]^J]\4LSB:I7:k lkJ,6!T0hX# 9d8D7K)2Fg粥=evʏ PYƨKa^ƺ$ceY̙^}@PNFS%!I2EFV%8yнuH)==dُKlB[gPz(7$/_[rEYuK+W[P,rЧ϶Fߣ<ӝG;6@HDT1I ` [#*Fi5\CMRrZuhϡԦ=BhkDwE'm2ҜW1Ù`ܫ剉=(F32BfV 6VM;Xh@kI-j0 GG4[]̣ē#AHJ"TF g Sq4LCniO. +Cc^+Gg uE5 YJr<&_:f#5Bάg'a5r2bwF9;t,$z&83yH%{ӖtNbbJ$ RK2FzgIH rcD?0C]AoPh~w5Mٕͅo l`gof++je$h"qҞ dlHBm #]l6#aYaLQh*|4@nJI! lZ;nvaUQVvî]@We41е:H땐cL_t5'Fk8Jbs|ve 2N"Ν#'GK;v\'4m3+(YE->jZz"(P Y]V:ĬKm)%N=")REf%'s-< LSy8{n/l맦v-nCxXCpy=l.m(\~0Kn(r=㧦3nu7 rə.)rhIvǶ mzfߓ?v_dhuC/EvtzfwiwZ6ԲFnzm3wWZndKˣCny1?"n__1(ó:~׾W%fh./thRnk;?k]匹a7u({#;o|6]EHwn2fY=ଗ|WP"UErWZo1r:z}ح-z[HB(%]R*f] Y@%L&oDP0 !3-M~ܶ ;mU λv"T\ ~Wmw1Je2 c$d/Љ 楓^h!Gd2} j#Awv,7>.6@ۻ08z޼vQljz~uEr=NJ#(|DQ.zxc;'TUe酁j"ײQ&(/i,Ʋ|ox7pBr҄ÙȽDA,1Жe™謲}|7ִ'=<&2t4+Y,JT60YIc!Id4+KhFOކHL)02X烗1C9R]jlre9J?O"+¶~u4L%/IN6k=¶Fc}^HA%olT`!e !:ՇC==ܝ2[?|ʉ$+ړ7A:yLQprADVL,0os VDUmUԚ !Re*t5#0~1HT+*#jF287dv46^ wJN,?}bErL^N-Z? 畠.}h:MF oKE,>j;Q\ hHgTNoB]/Á)q\[eh՞0 8f@Rsу$ΙUgh齳2?>gYpĽ"2 E;9\uArsXx۶ CWzV([xv>i3*l_? 
8N@Lmh8ؠH(&=;L0>`:+w@akO)DI` q":U0YEw:%c6"AD.yJRxmWe,tsBZOen UŬ)/i>{>|n* |>z:z+ǟE LBVWɷU5 Y`B`RcV nsGuq.jSX)Q/cnd `(TO_J_!8A2(AxJe& ޒhU F2"Pu˼85HmޢLtePݡd9 p5?_Y|IgQyM"iv q)s* 6qC#*2eA{,dn[\b}I=*8:|cvIRZA@J 585x-zz\l󦌙coVzmLD}f_{0W,'+T'g3Gr`^h.knTVFۘz/oя||/H&ݾA7vEenN<ގƒ2P{]e\ с4Nxi %x7on'wZԏ0n(5X$,m//G([75Ý͜ V2eT46&%hD,5i T G`xoÀܘp(rTb)dB(, b $)@]$e13Pv+&^BrJAxEEn# !ItT; 8{mGXo;'$GyFH>QfkrVېVPl]lDTw-5zƟBz4˗}za,uiNFQ)SaOd!W,6CLЛ/2`Pl,3Ӟ[swkDZO./y8 ??rJDON긿=)4*yM҉q7x18qy0e_v9?=ۿ8p3~]V}^ލ^"'9& ml>K\|}POg4>N0Cۿ^?~\DQ+dG8s_ ] >^] aߔ,Q92e]Dχ?&Jφ![O?￑=C ჴAv%xn<s`8ofE/&Z݃U&=]l$o7>]]a] ;э7>;qS`Ο؟b*_ ෺-[H3HO]1f@٠;Syǽx(t90>X'6FFM/TSooq8Ϋl{]/Rצ"+w:G@9[$ֻؕ{fَ>9ܕ.d"P"ipuvQOy5[ 7k((\o9r;aţNř"D~=ݝ[Μyxv}'t#HK.s*7]VBN6 Eu3zi0a ,_>;8#*mU q*3Y&5jyQ " @JlA}T?zȈx Rg`zWY3(66Tnk=}2.ɂAd ,3!3IPCD_E6q:9OEW7vo^mO/ifWruA*+x#i1ZCNygZXɁ~ݺv84մNÆ&/M{Yo{6\ aP-ZHTplr`F ,4$ .{k )ޱFi6ѕkpGTEmD-[!L|Bh[V;:U1W&{,sqXpB+\o3x1xFa*jI2dS}/ۈKEms4XV9J6AkZb:fkt+E<9Q&rC}BwA$PER+d?MIFj\gٴݰ XM>j(bQ.f<^_NLI Иy ] .wW=S>GU-q6Ӂ&D 1H.9m!Hr&w p#ZLFe&nXTFSm8xGy7*xò`̺z7Ѡ h8s8*PDPsNJnSg*5"Y lQ颰<eYm(^ANnl=Prs+ Qyr1d]e]M;Nưbծ6:jjv+N ˒6E֘+hkFvX*> l``qh-K^Ic簻u DnCx6/"Νf)wPGY$n)]lLqJ8Gp.<44w·+SE ƭѱsC̈vψ{Fq+M ]6+9z*2Umlmiء>݊7 o<\s \sG!'m$sq68w2\Kxvqz6u%Eeϋ{^)}c(.Fi PJ+:ҡR-1yslP>t3{~ H1_yjN| EW(5o4Qg7g7<:~,X:y}rXk`y~/~ĕzloONx`>;O04v?m O(lE{Qq(b d.Ç1<[Ɂ\V;^x Ӹ0d0Dy߻Nxu'_vˋkڥQv,~\_Z}땈Y敟VD{qxua|ٚ|p7?~Xo>|A{ ǬW'ON>=|HzI3f}}x :uyY'6sKw.EMYyuZmٟi{6olںټE۳y+gLdh֐X7XBUOjI5&\~RU98V cP JZbТjM1WIp&$EVsܥ抗%7߾󡗽oQyӢ!o}fZMQ"fo^|eU.Z,>nܜtZ{Wm]+r^xa@큱3otSdQ10m]/H^ &>vQ.hPgt\\p< z#M=by'ޞ/߼)cS ]=[<1d )Z)?˘8xٛUm0AX{oa'kXR1҈-y-bءu.!2Qwf p®s ۾5P߯s=u.%U;DWtt5zh/˯P=]=Cr7;DW,w\gwڰt5PF3+? k+w\OBW@+f@ɲgHWFCt.\;kWm׮#]Ev] 3t5ІWW@)t++1{#OLW+Ay4tu/Avѕ܃dOWz !}p~]-Qv(3+g+dw\wZj aOWϐ,y,X>zEXnFy>z^V, ^=wjgߺ]6;񱛥6/%E(z-,_<&LOgҖNY;ꏉX1#D԰]o^x)e*'m%cWڳ@7xg'G6dKsWAH/qqTf}]&YsD48Xo+w ;\3 O~II磂_OWƝ*6U.2B.?&wsmOU`JOާVm&!.{n"PO'Tu}whb0lB#O͹xwޜ@cV.k*uJ9ӐJJ6`s )'6TG@1)9DҎMѵHIZԒBGZGg$f̷ ҋIShJ:'Xr!xQV< Bb ![=ChL|"GvּXl)6>*CaӍkƧ68[_c!cfC$XgPgFv4ڐ]j0MMȗBiؠ⢀)+>#<bm>h-a]ǥmlVAY~0P$PyUuѕYɠ-]-.5!1ѕ<q%YgeеT,0 5 5m0\)aIF=Ҽ1 k9x(P(jٻV$V*BRO\X^b ~N ҵZc•$`ԔH2l`W`\d) @pPk@ou4+*әPmK d,ʼ޶X U܁p 7CAwVrEP7(c:o5a iNBAYPњ=4v!ץ;-l eJ ߚ3{$$XYu0wL  R|"M|)q$88)ԙJ|K@4`uF 9oL6vr/ *A l*31PHq Ls$eIp5/RX6[GoaH=[]+7aZD5=() E>R]'M f1r 9qWM !ѿ%9o ʤDT,9&dY."2Db۽Aȡy \+ =sA |fp^-aPN e/fĥ&.B ǘ@QEb iD#NQ&`s7` r켝./*\c /xU ݺZ01^V L} h"X 3 o b9PT8xiPtd)(CҕjɂJ2BCTXrKf  A9AJ$rDN [ 5tqxѼ<0{KH`GeN֖H#38hGp&pu~V% 9ՠ4E*杚&7YKVAb;> #m6^?=9:¿7giLRrS,sKе 2"0XB0%N,zD/G:@8{]8Hј S{X3͚%4CܖSlK2fhNJ@A WR(-f)2D[:fԄ[v#h, σ 1`X!Xc~, #tf% L1c4$C VASp2΃ì2,L$qt3%V+E|hOt,Y{نI5e547uJIWo4^pIQf  @,y F `f|1U 5< csw^y-_t9;\h]r$8Z0ztqtӮl#FOƖ=F`{BNdEŨЭ5HK9OՓFCoo1F:? 
oUfĞ8V c  E!9sE6* v#bnsvI,WRSLw[z,aH(\bFVO(@)w7.͛Ʈ(XB|⏮;"0#R'7kn wPYy/a5ARP0[T1"@ZM>jX{Y;7u l5?/Bv)`v6=`z:N@wȒG%96plٖ-Ǫ&@b(!yyx/"a7Ѧ*R9*iBoKv,FphYtE t,?"S>%7,n!Jñt'^BOn 2#`@ 94" b7vj.:8pD]jl l`uDr&!p ^bp0[5:@8/S  ND40(B  \5Rr"q5Yd֦wLK\rZd>{AIktX]%}P|?jR77×m]/Ҭ8w!ixV49ץ@ z}:1%o0:*d>)4=R\zWJP tJ F%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J Tp@9qI U@(&Ouv(*R d pB%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J Tw@R%P+H_@(@%*(8@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P tJ I {%PμQZ^ (A%)Ң@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P t8J[S5{_<{;ΩMz}ZO^vba2 ΋> ?%tV.Ze^&c(\:7C7=+6ҽ ؾU=]ek"]]qJ_~;e3R/Vrәr ko-'0kA@S}y1ڢXFSYRVR[]^f%qo7tM= GMCIcEO4sl#1Lu>G` m$``>1LP(*<)PeO wyZMZ|nRߖb2@Eixvu¨?<@Y5UU4/Ht2YL?sI=2`z7dRohw!7 $}:X+nzCWͮLF>(F:@RC#ʀ6Ǯ vUF޻(J)"Yye?M0hUNWN"ЕV\>yWX}wpWt޼[4t*T(E ]`CW}+@UFp*˨]e?{WShog ǡ+uǡWGD3ɟ~2\Vhy"%۳+])=H#Uvhw(!F#ʀfЮ*õ+@w(B:@┮$/i؂؁0&aDQ\E_',D 0d"Gek)^*iˏA [;qxRφa')|*`_eaY{TaT\QΦgjvJ̚A=`r-BJ"yLTTĖ^z[V,D0y(۵s96jI0XZ-EaN&y ؃|i9Yl~ş)9ΪҺX*WX| (5q}2Zi=( 8M}+a&P//t*tQjtut%2'm?Xدx8]e8BWR-2 ׽ Zt(/ҕBS#ʀ ]e75D;]eHWIW&ӧ`02\ lB~=Ĕ^>_L y꽫'j -OվJg{Wz HWzLUIo*5/thNW%AU>$2`*õ/th#NW%HWHWKF{CW.7}V}rD]1*;M BJpzDd>: QEk;{sҤi#*M4- Vݴa9hPPprXkb>G3agnNվ{֜B nnu.|JH,ZsY`Mi5>e4JR75j7Xn2J2:^%ݱM,ozz35˅]~*tHSᦋ;X pNRz_۠N|ʫZx>=_\v#y߫ iW:FZ^^]|Nn6 'NF4}klfA:qoɷX_Z^{Ӫia9㓧@'e &I++S3Wa tsxsw .S$∔0U8 &D85ܡGXWS OD1U^'1K??vkfXԻPJY- xn \ YXR^/Wwn:Y}}ϳ[k|24N,/[s݋5lQT}O٨>^(QX{_[4w/ѕ i=%m$F8-zds|}mpCz^'Nn\5%n&͈jF7g|N0'oXLyWIAjedgH)Je$B2:Ťe x&IDH=ŋ[ΣL%/EUlXlY Cp4(~O_Ӝ߁7\zOoWR@v>Fl' Umj"le2ݬ|{ $h.־<'c(6v`_77PIrWc~iT\ )5%;ۥ˻TUH]twŊ*&sj~M~ؘ5u'ĿTm@%w__ tmy|=L9Sn Ssxu=D]Vڡظe 8;6>J >2$FJ?q8R2ʠLjKWݚ*r@>r7hݰbk@~Os~u~C7we6ɝ7ׅkWTMg}/{:;qmhW'v"k mcI5gKaZ[:Oz[J&e;֍M[P]=B:QEn46-ϊFS~ݛw\Ӿ7TZBw HQa8 kh;Jo$ˈ_?oa=]r?)^k"N)ySMC2?TJ˱V[y2w:HTݿۅGkB9a"Z G a7J5U>E)xPL܅_$=7=߈5Y?֙mZ a(OŲ a֫oP|h(#r,C(= Jt*9yZj%X^"Xv?s;f" #h 9Tb2Cl3l)JШX~C_OkN G)-+-6+ 8ZQczf;hrMS^MY_ÀI9 % 34ڋ1,.cг}gv:B'eKn]뮁7:RUMtx:V-b 6C:cÔؠEP)m"1I0ُd~B@.8~i|k^~|A6D~XeYWCA.:'sǿywo_|zev?{ܿ9J`>&B>_?;,bf?>=:>f_͗/<ǹ*UL"x&&#otXph̵; ڨV{-'/k@.ȱbsuQuC,.C'4CO2P^&j+buv6I͹ւ["ϟ1O1jJAy?|~%7 %V :DbnohU#`/Scb,춞^hQfP](*D.!cq^*%Ar<&"X^%-bzgۆzazN͇Nn#4`%&T [3d0aQ"䐼mX[I64&)JVFY~4+[xY3}CK V4jcLQIL\^k ":$.]vzEݬIÖ4^#:'8P#]iM0`ToS:۽^XMH|F&jx1 2֩/bNZ˜1b;ʉ>f$'Yȉ,Z-:K)+דE@ dK/ɃdC5s,F@,Csdz$qZ2sbYߪ͍s9nta('ڙT蝋w\moHYH$8j4${@zk3 RV`7a $*- lJv AݮmPYm2zj9U_,gy] au>,:>CxqsٶTSowy/ɛu >zɻc{w-o)Ȭ|RTdh(Z2POpe N4ehflzHT$.1(T(`m5RkJF}6@nFgaόʰ\83ͅn&ݥ\xX.|玌L{\O;/?gUTO?|~~lugb"%VGۘg`6VN⛉ءmgbYO>NruJr)RҴ@FREePj@RZ,[G?cEݠsüc.Y-Y{A;qJMO-W9)y lr3B9|B\eȊ -(.\sUL D̵jj5VK0gr!(Ѣ*E-^upFfj5/\0/96/y]T]Sl'*ޯkс.d,S&&YKŹaql>a<.@ {]Vnz;&&v~ܢ~w(<.we('rR7z͢ߟîzH1g!'Y_3CL0_~؈Y__OWB&T1F2]j{ᘦ sv,S&&|SN٥ w ^d[d:ꉽ{?ʖhjUnpR>9vF!(Tb Edi)KS?Ǘxn+v_]~{>~= k^OHXls% $ϫKd -4EX,h$,_}@]WBy-=~!Fքivcβf񖅻bH 箊j uU>"@-8i`r/$hɹRwqҗclO^o1-7NeQ>ԖيS FVQ X`!OYZ:R,0su\}lw9~X:$ЪOdzf+*)kpɮ*AL6CK /%}Rd"d)uUj IoK/])9_*[0UnwOlUM" dUGE1PJ2P ؘXՆb&慣I+|5jO|jOgGtx?>lF$Y.B.E,YlԧmOJ<α͵(Xi.6pFDm|-S_=\dnwD$L]&a'I:bc3=Hh( j:0s3Y"e>R[樅dYYvGdV 6^8KP|C$}o.=ζ[_e7^lG3\'v}qq8kM~=g`9R%k ‰!9<<+e)*|2_l/rkhEj0E_v(ݯچ?MBc5X4X5eI,҅U}M+3Hjc*JnR ]2G_(fmQ 'A3a&X\N(St\y_yc5M.ΝmPXW9v;l1 [iiY r֐{zA8vq~'-b母,nh5HWU%Pr F)Q4gU)=zj3J/?B I४%-,.XYȂU %K&FB)Q~TkϴԑۑPXg^$Z5^O'1qVN;/T48%\n.X==OpV͟7c I32 i'ʛE2pf7Sb:}lOJ%ߛe,tueQ)0,~NVQ{`}9!<|o3sE9e%-jTCyi4F0._j IʕPT kRgƭ?Tg\3jG~zM0+j_gE6շhd>[V۳/R*v$U\7\}).ؕNJw8[Sm7ҼhyukgeΫ7/nga$8v?N wd7$Ц?ސOOgmwz띤zN݆or[oT^c7tj>Az4ϦVz~{ɇ ֻr廮 [˪Qi{9!tcUE_}3Gj2 #f>SK/|6&Y5|oW/gWg_>nNyzsuOW80"~ yn_?v<ԭ~[ +w9d|Cy}u4jC;#kہxxib2_IYl3gy)uV>?,%̳j-wQ)XSUVB,Z3e5YN w#ݵ%:Iiw2pr yȐ\kpUya8Zz 5ZPj-';ఒ7YτQVFҳ9-0WbrV[_Vh|5?ڧy 6d?fvXs[fUZ:fKw˝<[ 
ϝpP\= Pn!},!E$CYĂ b\'0yFťmЎ iĊ@[A&!0q؂L C|r>4eԸϡ bc4?xr*;)&JOT:ᇟn^s*@8y^Y7^LKs59DDp~gie`:+~>Fg= 7a˦ό<H#EnR>]C W MĽ)Oy #sI@ cp msD6^kO#s5pyaXƃldok>Ð~ A1)(x hLnNOOBNjXBa^f}`55y¸5D SPpOs`1Ƚ WQ~Y[zlb\ u /|V' ˘3J7QŶGʦb=k2л|_uݜnXR<s : \RsfAENXʣ@$7/9rvh\e9}*pOQ#O!'0b'b~ש[s<: L'Eϡ :rp{F#Dr kH[k$}SdB&ILGw6֢K0ui^U& Ao~g3v"sLԖ!g?EƹRԥi# ]W7b݉zˇoPnZBYHTceXA[v8CK!%jq8D+W cyib}2_6X=oLJ|,|dsrR%N&jg\X[VWr"hdԸs$?q5!9W&` C$U<iSG*.#ԼWjB܁ah'MVV0pxO=" cAni10F`Cj,R΅uR& %hb82sL qc!9@t9DĂj*%X,[68ZwM' 8Paysn4~q4KSHg}W :)'zyogmhp|/o4( Of9gڳhr* 2~/:]K0~Fsǹɕͧ-XDG(nrfO&&'jKjWJ9W([$<c*U;KPHmАp%MY>h׉vsbn}MauD0j4HY81%ShR3F$yx`dDւ@PhkKAP@MZPJ$8[ NtH )SG<ir X`b V;'NYLj#z䬧X)%ؒ-6HX@3szRWuoCa }7}}e@U!Yh8>.欱ѪvDÿFX=\`9a[ űP!%$ݛ icTyİqca dZ!I@7*77PM9_7Ltf5(zϾB)!QV_'8>" ^Q_j.'ͪ,aEqk(F[}Lsm,v0@h5ob:_=n m Bshj -\f@A=HkK6|' Tk;2+= 3};2z<jO2L ~`ɒSKL DM?~4zmp~1-.>fǿ_pw`m3 ${,|8zJՂRR_9dXr@D,=ԃ^CoKZ#tkplj}GIqh&9cO*wuj+c{{ҹEykɑҌRAZ'Q1|wy-`.@9{lݻVy;7FQ_)%h;* '˟̒Ac<~`.:&eOX45Wba>wwM5:o+M$t_rw|i6\GXPełjm? k|ڪߚn=ks7/{ڡZu{Sݤ_0bYerl߯1$pf(fp4F?!O>Uϥ؁˕>ѣ:K? _P1u/s+Qy/PWUJ6\w viOY j+XXdU5h1x'BҞJ!*4}-x~bj:, +8GƈCRϢa`^#)/;E.sp&{Y~6CE+= ]Ge&5qr0)$}4͹!7I3F8F9\0ͦQ=,F~1OG񇎨Q% wgB-.{x$Oy?5.͹6xX-y& +}(=slh\3O.ZDAӀwdy!L7>/Y2ZgÝMO:Whѝu! ]z-LV,nMtFyxX6:,ֺ{~+޳ZpivTRct6*^wUslA|ɦDSz;vtE-#,2b=-YJz9U2g:SX#<~:vSЮ:}K/%}\# I+uƜ+汧>vՁreI1N̮kQ5'~0kw6)T3Nj'&n}҅[ӈ(:5e/4۳C35eI0{@hNVTJ 4z{,AmŒW""N5]X~~a|a|V/8ӂAAX=d:mwΨTjxw\ͯqvI;[{aTWU.{M'ZV "Y|9\sf׾2i,dƓopKܾl@!oڪJ;VDuN{cT)߶T*Ju,R= ߬9T`ζOMjHʡϳVwHx?Kgw~>Azt6!CCitVRkȮ!4뼤1H *b3(/dERad6IfØd6IfjYRB;aϩ"_Eg +dh #.a}bm8]"I ' O?$Rs4,aԿQF{XտG8 +&[oƍRR{fB92hh:QG3B\zu6Z&HimlmZ g^j 8(6!JOᇦ?[ I۩aXZQCuc2|M & 08|\.&v8~>R-FtaTKU [ƙ57fHQɉ/#֮9pƒsVrϒU5R]N<9J}G %M]l!8[qmEeiFܹ *K ؃"T1"M BtwWm D}jҬ#]֟$,zGQG՟~Nw -GԨ`~G̍_WW}P(E)'2 ȸGX\ѬLv4M>0~R<h6i<qP A Sz$&FmʟE^Q.98J;Pf5d 򲠻dQ-nU= +Ė9[f{4 Spf@)>%+ՑxDJfs8pAj!~̦NlO|@K(ߎY/n&az GUMR.jBz8 C.RuXOIɑv 5׵9ocƒ{*Ԁp]PeQpZo/O`CQ ;G~=\:ml~<og27\˨oWB^ht}O[FpegZChw J- =9 {x(# Ace ÃvjHx1OszEiq z?m]ztϟYq?ܿـZmR KoE*EA;P9ds~^{ N]Y='zc-77͐9TwM%yH umn||l2[}r3VB#lA$V:S lрK^9@ݖoB2R.2G?\yE=PfJsw8}vE@sM Hh$oWVha#ڹ"Wy K]PZS3H":1 XV@ЈVEMCVZ.wo%PǠ`rJCfF3^r%Pֹ(b@ݐ$Z#$ه, WlCK}>HPn0yɀsU:Y='zD!AdZ7@tOHЩRjd{Z_^uNV9YK"0H^5m.pUCID  5E8yԖ I3S AL Nz+ (JL7kc:]JDcJ7ƇT8&"p֏W}1z2?Wg*"f.bydzmҦ4fS g(4cի2~Z>B-|:{:nqjDl袸=JUyŘkc{sj[su=1!\r|,ڄglI7 [SN9H'N&/4U!86)Y\5mYH8HasԑnOiET[ㅦJ68gѣy!.zl^NUdopOG^ݫ}ogWvrq3/S_D/Hb$\}5i/F# 5;L̙x4/8H!|B+{'p.1 nqiCE%%m*!]%8ZjԒpYᎤJ ftHДB=^vǘ!Y|< 4NFO,M SUV [I}fvku\crqwkB r{TTw i.޲;y="@ȓ *-?ss%źh`*4$P P/@ッ"  8sB |^B\ .~z!q&fM[p~7f!ؙlݘ9֒QMޠEbY!!1(ΚX_i}QP ,x뭹<0y(:@%(n9SI, `A<~&h^01(YD j#apD;ϙ(=ވ\5‡BB?{F俊?ff66n.; d;u'ݯ-Yv?$ "*b#f6Nt= ✁]W^s/"ւ`KB 2$5 X!h!]ICŌo >}]º OVֱzT`P[O Fz~Zb;+tڻ8&KHȡX>{7pS M[dJAlaK84[*1pt1VG/EPζpeɔlGt):W #+8}g'T>长֟)~*uq :{ѵ.oȎaIoOM]]@ڨ8+֌TjQr߻ܗ#%Ȳ"#&0A*ʛ\$e!wDH[,Ƃ=cuOZ ;}Xzc}X䒳z29s WT'´!N/كxZTJ^s“m2 m{c Wr?׃zCk^˷N2:,%Q-Qc`S8 rcWcSoyuͿChX5-oU( >AFAQ2(Zjqd:pz}WZy7Ͱ>;_Y>^xH.A֗`F2h{!C :7QSER}dۇ- B쪸oI3[7 eiU JKTH$a|XZYJFWes*GO** f<9Uq^ =5zrdQ"9R3DDỌj (S81J4XʍekjQw}`ʂ`5܎u >T $~>}8$JQ/tчkE`ч:$䙋eb03e}5R)Qk.f4WH#hD(ds9-y-D(E,5Ø4ю#WcP;|3,IZŕ14D{LD,lJ 0R'F;llH EH& xR$RUl2F3Q+[n/c\sZnV݇.}3$8FP i.vGĴ~~#πp;,$.Z?\Zodrj]Bv^Ud`azmqӕA^˜>eՏ\Mേj7fhKU٧ I1tBC_/t嘮ת s: k,V*%2-'t_ey h)R1H&x'Vj}Z ЭE uYV%ҷ:YP)/R%ЪAZ"} FfAfWŸCLto` m"twkL*`k̙uTȺĬ{̢*s^'Rx.C@-i.cMEWEzR|Bhn sbapC_DI<hVYX$Ifmy>$aǙZFZYX^`ҦlL-Bю>MApBB!A-rw~v)#l0{Phv7'2+ OF!gaYs'J̳vl[R 4Shd4BM)5gHL$]'K$3 CSM \%ilɻ}v}LaV D<!oc#x1kp@MccFz/LhDqp;)2vq0а}r+9Ms&ŽӑnqsR%'im W):*'ڳ\q옜o*+NawxG(\'w?$'#~ =g`c PrT$T|% jw0sX|6^xg㵟X,֎2FAUBb!'D|b&BI]ݒJa( .^Jګv`JQB^(k7uH檘_,qTC*!{A55FT`Clsc)ĥ"3-Ro [bnQn@U`F0 gDmHPl43Qj,VK\hjS·@m%haK}@ E~v_t]tڝ}mVb@!4K,r17Ta1a*L̘qp( 6&*v ]aM_NVA 
ZDGLD*fi^B1&L#ATԟҢ *I}J!EhopJ mmB " _)9'B0kRYF&<_mI"QmX* E ҼPfX0q[|`IVcJ(*w ,nIPHYRH ]=gӿO*[ܯi]V0P.W$pz:Z(RkX|o*hXڦAmp6"ce(2='258e%WBEHyA4SJˋHC!N q5CU*AH9#btd8I)f3rǂE; ֊cֺΒD~ CS+HTGk%V(Er-.r*D0*m&h8 z]M~v}v" d5sr Z7t䷜C9k)IM69y$K[c4ƍ X@#8y{aqXsrŚb~!f4u^m9Vf9|o}v7q)jiy_.ؼc]E6V+2{ ?CƎ]xiY<~`cǝ5=AwI9睬:9ȇɠ7w6hrz@Fz튦75cR%:?,.Qw`y河+(ZZsa\%,/h-0%'|;J#RO InqZs œ~;;3]m fp%{M,d"١jp; o9F>/hvbaufk hdC?v9#5X!FHX;_G)jceElcցn┧1c.e| av8ZnJ> OJA|=/q^p~^t'c옑u߇OÎuSY/"WN7bnzBP^:c2}..f-e eB ?)Ic,16&J!_tĿGŘ K wtiI\l ; &IZ7⓰:n~$8pjlDjÔgvٙޜ۞aofƙ[,es2:gFE^Lw"1N4yv.f΅L/&@.~M.ގ` |~v5OzzWo)^DL7ǽt<{|? /~V9A/ ]Ͱ2Sw6\ qm=6O>wNU qK 37}v|>Xg:ߨgᡯuȞ[vX>zKboa5X9}-?@+~|kS՛e|{ R~;}|l>ƫ׽7,"vr; uF(hvS5~鮏K@g2122٧ ӴH ƌcx렼d0}?u2иO05 z#- Û΋rgӳi뢅)>Սl sk^fيrjb'Ìr˗ ,Wku>&Yg+N$cw:~/]5?l ۼb \TG7bW`"XރrD ŲN*Ѓ:ݒ/V*1Et6&H}R$AIJތ0U`&1aOאk"pd4:s$ $ɊFj=,UiCPo:AFgz8_!23v+<a֒ {=~Sl"9=&Y}Y]]$MCf2ȸ2"y,)`hs,0C)?`<ܚ ȵp?yy)eZ1 Rq|do!4Pde, L}{G •K^&S.B<̃2YsG*0Qdr/ajhbԠ)^Cd2QAL` SȄ<+G>aS4߮iC9n or.'srr張"|O?+)C*VVf=@)#6n=PCŃ=|9bn 18H,->׮YE5uꆽ  DY?`x ia/[aC5Sg?.OsseyI}6BaOQQC&XdzYHN'Ο?з7D<)D<)D<)DI BhriHOҝI5wU 3lT~fKzѳQHIIzF#dcUEtm, ЮECYK@.;M8MO%h6p[&Kz1x"`Zƫ(jjĐ 74\r&q9*m2 $tHENbSyj-IkڒZk<9l"M+V$b.dMup5nnI|d󵴤Q44Aӕn]J|!g=ʘGuceg:oF^TNYL9i.?{W 1$upR)']$GEֺ ˍɖ1 4X45iddu[)Bh$f'H}N2&jɷ.Ê(ټJX#E4X^fMy*ng*T/WPj68~8_R y6H5 Yn路(ʼU9X%.*MPk˝So ~zu/惾cU?./c8cJ,ʽI,{n6M1W\dkl뚽_ X[ZYܑr.g?gY9J[;C0!Iw?ޝӏh *OM*wDihVCza/I=֩dCdR(Vׄо;HItvݪgywBHJ 3>Q0co8!d79=9~Q3!dGxt~2V(}Үx-Ԟ ; Yd-X%^l"Ǥ| :HqhjS~px?{{ЮTibC߲{rC] >U?We4a)ׄPwMk5֞'<򟰁dSdU`)!,+UQkBZ8P)T{2Bc9WJG DxCn1З9F&D* b۳̻A>[n[N9*U5UV j>Fb4!I2$MYH3M6ƔeKZ'8?F|*kj9I5.j(Ti+Hzf iwqὕ O86 GQj LQZ_WYRWRf;J6;+v?SD9 QET%`ybZ@wbí?^\{;ޫv}1}o=.DҤX`wy\]`$<=U.DQcxdۮl~8:QpBsn't\{0F8(8Fg`mkA)Alӽ;w;,xBakw7 fοq{uH?m+oi!NKE5JVzR#x':{zֹo o`[7psnJ~)7Vv0߮O]Qz*:"V&t|s=޷x&z:hK[f5Vds{GC3R)=+(-,Ƽűj >*?11 36*)ecS|[֚ t/]>خCTʑR_h9,-K5pB  2T+py-c_UU Ntn&#њ4LL夳#- kdZ^) ǬRynJ=kt-h.Krf@#$΁&[y˘1 . ubnk`Bكn]E?);n,}3qZ}dQ޿РUzXҕ6 j2rBݪ2/r{]etGG^ΉN41 Yt]=ҪC55v [b 9[* :x*zQk 6`;Evb@G5=[{aOõLL>"|T#䲗ID ,vM0FcմXwynʘU>L.3f͗~8^n,z?̝~ks4Ot:~*Sw]NW0/2}]xΏi4ZiLjFj10y.,tWչ:W}(`rXz쪝ȅ.gښ۸_a!gϮ ~Q/>T=S53(Q$š.4$s!@˙*4]w[?,gڗBG-ԩӦH :q童*?/~Y~'Zq'n_gY,ﭞOG_;͟\rο=*^/o6?_F)rX܃/_o) 5t_IbG/Ww0xŗ)O]4}j }3\&g_#)\~׿@4 g&$_KV3GFga!xvmg?Y\39\lnkO63VE -gKavz6[7!D<}'V16z9h|渗_cxuᇣb*,m(ڷoGMŽJ:wcF2G"̱c^e,e𡱺{.ղs;1P OEJ;Fn2ЩWW(l^yviJ?~S!h!/'q[lșEfF7ѿ)w>a_rѯf\eN +7MƽOaLf㮥*< mY2wvؒ^f_-)m4"%1H1*)2q!))ɥgyY!oقfBPEnBGG|Xl75|.-e3|ƫYAJN7h)+RAƜӵzB]/$R M84:(3,XHmhcyi٥*ta%7S+wqdP>y7&@r15?MOnj~*O$`V\"ʌJP$ 0d$sEQfvPv\]K"ki٩4 ; CqEyZjm͎VN%`OaKg󣴒U8li0]F.P [sR=–.l [–.laKn%–%+ZQA~U1bSz ʞ|,AFTYRCud2*a*Ҧ\۔nSSo$/ҰZSz4V÷c\+]Ƭ'$Ҫڙ6 GmUmEYUU YU뾳:6[U VUfoVU=UUg*ZU8eUmֺE0(܏'wqd+U.xj'b_&c ^:׸e­NDU~p p@~KT*2ȋ"&3>F892 B[xI>iAbaT[ hT0wTZ8ٹnGrymSW oE!F*hl5jsѫjV3hI3GPYD} "0ncM< W(ٵhRgߋИ©gjnIe< X5E6ci/P.f(=e@.ZHliXč(©F } BXZ7*8*Ql+ 43q@/ԅQ/5: 2g4C.JFNqD9AX)\;XlɜÑl/U SQa#R(+02JM){J嵉R pHI޻vFRXicnE&TfDpZ}h-1E=[;;a 4?Fh8Y R(׹;o34fmMе K)I}}GH#DIJ=s{4qaA  ,oH0=".GKTԻtdڪBv:0  Zd4';1ul]arᶙ&cryZ6j"Q\Y6auU9joz` `;,2m )H(KD$cq4uyUA^H]1g.U__^ bC C.!s[^_F(Jq,`04U6bR.U ;q_9N>Ɇs4e # \Xx2akS6\uUߵfù\.2N\YC')G:Iut3Y:׫gO_a3{\>LsDsWe9VG<;2iԋgc*vףy2޿Gˤ.ԭ*uУJ Z''c"å&zWA0'J)67{3c _S F*YSa` K|gdtr_ǝò 8Nm(p5=GO* EQ|qqІ!"kN;s*snufp QnVm`waNK8~:w`@b?RPQ@mj=vssI^Ѹ;|z,)LL%{8> fa :MlRp{]}h-i,=vSLH$0X*DŽgW6Tc9eHB@U q0TĠ m,dKIJV)6,1aT9 ؔ|U'2tI`xS"4ф mo҅ t:) **Ow9FJɴi {% ZAM(mwkRʔEaSN}+_4TF<%趿ZeVZf3XpWk{Xʁ8)L t S+a,ő+>5𘋄K{ ĥl7QK-pQ%Źh51q@4T\ Z)e2a#pWxֻ;v]I2<N1U(MA\$C"e$UB1X]M*: w>a8fԈ ,MQdTDiBb&cM4rO.7(vc{>vKVN4+bp$`z~.Y#?WV܀wi>(1O : ۿ(%-۪xv9m5V y4E2N&Zƃ(Mi}.HybEƯeZgC|I/! 
kx>J APk\86}rӻ[S}ÊR ۅccAQ²KC0bêKUwlWz qM3$4L 59'(VVIfcK0}[|/B&b؄mWV)j_CKRyYVP}TPE̓ *pFAT*آJ[(Rr*AcP^af{;Dt%A\8k.GHEdܷW< xmn/x+4ӗT:6دK^pmWR T/tL p\ Y/~00M\t]c\ WLv𞵯;Ҩ}QAEv-ctw~ FwL_,Ϭ*F.@qPoA0B:a ƊVAsw2V_T@W{{(Rѩ[ФJYrp"*ikşU+%F}դ+XE岭;,RF `.z6١|pv <-J%l96\J ʳ`^"?/&$?%,D2&qdi"@Of6]+zC))sgu{pJNecc-'mct g-K,{Ō֞Vi$ߖ_VҞ5tX_XZ^2%lLQ|Dqܟ^]dpOjOYcl[-ZaPTi9J]Ϙp94F3vxؚ6٪%?L Q2>|Xᨈ&^)$\ݾU2V'"C*?k+M65 xu:6z [Ɣ$Ȥ"ge G?4N&W>wx0V0-jeίhn饭;bBں1:BHAAJkmX_Efp{qm:68hbr$9kYaʯPd95a@)˝ 76&<./g_%Ⱦ~`襋?mwó9r U;xìRloleLofGZEpmQHEr"WxVN˝,QS>v\fp ń V:HHb5sk! ^F=Eo+|$r @mn  gZ <118&氀'm=mjNJ7`V݆ڭáZ u[h  6[PjSgxڍ/,T UU(>j+4B=ml0ec?0RPS+x5cscLyi~{Wo. 瞩+ܺ- ("Y-qi&5ly9[f:[-&~n2L+4< :Uv ['%9]tL.n.XXILg']]t-G# aV!b[2D+౔]1)bMo'9΍rYgѓeI0x9"IBzSLdY]¨9弴p3wfxBNȲSyWx3.A?2jS JuͩJTDdl⇹bCqPiFYs5 *;"bZ3e@RS@Et@Ⱥ`LT Zܫh0o@)b*C6TZřQVz00" ҿ~ 5t%Tak^Z}x ^)x3(Պ1f،Ch6hIG8 2H,#.R=UÞx}uaP=a\N +r`N+gg8\{ds G2+EVs'Qt(z)Tꀢ °{jc6PXuu_%H=rsCdoiʹQha}_bz/vp?OB^["cs,x퇍+z6]],wpn.^džpю =rjL`᠈;`Flߏ_VݠGuI 88dCfA|2}tmrMgȹ@tz¾7{!4vJtG8 _AÝ|ׯ67Цh~}nsjMjg|=Gwv_oްjǽΏ~hp}/yFb2hO~Q mp9 ~ ݺneKYHsb_6OIQ\IaCH-tuyΩ_+[|8;³໚!B1Lw5k:'7&'.OȐ"H$]P:,ܨ}0i=쟇}3)J]>LajŽg/[.|9HT.}Z|~77a&ʤRyQAҰh= ;0A#P~OAϋQ8:pz&Y BݝaQ"|^;qœBAy=+]ny{nnxoE` ~$K;{ϯϊ7ȫWӃ^oD?aҭniHj)Y?{gF LmO@g] 1eζleڽM`v(~1o#sX FX+n25n󳑸(4=yBi .,&t#'㘼i5آX~/ ;3X:3'//QDkfqeٖ%g[mYٶ/T8vl|؂2dBTG㨦,`-1zaB~RSF ֌lld :w 6Rc 6`c 6`c .{[f%CYr?dez@"0l.WkUb(l9@K"HEJu&^o͈r"g>`fmƼ~RkƼnƼn׋$)žd) KY 8kU) baϟjj\W:u7}d|SVaw0-D)J=QeF[1 0&[VR$vhsv2.xb 2IsNXVV2/N UcRDZfZ $M]er,:LUix*݃5QMQ{ԥ$k~Rs}) y$yxsx*ZbvMW'Z2$޵c"EgP9|? dLhY\.R7NӦᄂd+l)cMIAypʠ]@'fZ|h;p9YWmb)TL 5bZ)EtpH@Ʃ;C(`O*v7*4u)L.}з.}0ۺ Wo54m*7QDF]{8N]-Q烍WFتFE8 ŭ̏]|vL"C3}>l\19GDTAU5' ^ f{!]_]y[& p*gvS 4aolZ:?H]jtNAc-:H5jm.&(EYb_n;NO(v)0irAc xUkO^ݪN>mu.n8G Tؗ^s$aϚK5UxEu"rN m7諎W M]AM` .VU(SPANYCl')R?8ƸѠUw CrYPMw $ZЩ6"4pȜEyt'ԗ 2nz]ߠɚ|0-xdz'&A&Y;%Q湻 gz'_g fhԛzPPUJSCDZW\A|bDNVĜĽh3lq1LԻ r+&8 >u:ݻN`l?r[igֽa^Fjw褿zݚ:~o~# (ՈN4/ ɠIt ʊ9X7V!O`q'!}S`5(L3b-7?F7Ƅ5<5uGY3p3.û7B,vSZ{ [Ҥ] 2=iהpffSq}cJ3vquloÞQW5Z9nߩZySʯGrO6Z5AcZv[;{IM^lu06QD7PXbeh &3g۾I[*B5?!2eB$ȇ؞pt0"%4dAYP ľZ1v JJdE['oۃ!2O,'gRPj/&$E`cPv-nfv }@qF|7Jd_վV8o69:٣"軫!>WWjH1Hd ED ̛ kV#H$v,:yw}392uh1K1ghq(zcEy11CcBʒϿ8.r9z |*%b%_cJ 1tJENNSQT'>L(f rJ{[V`kw;3J3'S)ô5#I JA1&PBH ]c%k.N>a)dR 2nEZA012g+F3\  ޜ<=?/R[6%IJ|8}{7ߞA9O?~w2>zޜ?޽=ic^Wko⿃|qf_^nGr9`|y[z?,/^ ([6s^(#3vkf?jHrgg?'4RVO n|ѝӎ2)$l)9Ms~pԯܦV?>zx50wfp%ۿa: XBDY|BE5cFXʹLa.#S(E?VՐVnkv7Y+r[+[ܮZΉrNwRHHđH&PPwZBSF87( ~+D<0왞n9 gN >m5Rv{qB h5YMma- v׶Yd." 3HbBُy<׶ Z"dPb,v:1VnжPDn$L(c',Ad >ӕH˽O +`AUv#l܃֎NZhM * P 3Бp$@[ͥ`ȂEAVhUVjkvڋkjVjkVjkdZKbS vlk)g]H 1TjN,h…dһb!1"bGIkHl8 VG 8O;@Y"4ۤSujI¿\|ZZ e0,HLeԐ]GϏJ aSr yh\"yN@B":JgWs:pԹt:#ҭu+AOyxH[5s695bн{769qpOLbg[/zتY k,֥ uUgQ|: 2zXC@It2O??^6p3 û7H/M>X&OP>{fC>,Lk,l@/Wc [ɯʓ4wG2 mk~ɚ?^umz}|n@qRSiB" q$H4+c%X|%٭`Y߁E 50ڜFD4eݏu$hi{{ *32?RhZMH}O昵C~tvo㓓xtdGldD?DH  iĮ/ x>ÁnwǕ~OXrh:Y46]ѯ T3;3[}bR@#|zr|vPx=u1W^:VNJlJt?Mz,v?\98nrQ?OXxh!A։,F!v Ra䄘g%:>_IeGz[|,||S-~49yJ>-|껥op_ײ*K0ke fsYBs߫y<6?.$s`߅ |a^i>I(6ǹd{XQX~h-(1gd./ ے|GuVv,`0YVY#G+ޗ_V{} ni%0->vٗ״qٛpϊp/OzSyf Y-|8P?vQxy/ |cG:tY#.L`9|9@:;> )Nl#avHʓw}U`/b z=b8w͒{Z*ɳ*r/z4nw f"^K5|TGut8 }'k;wm}Hsgy#TO'Xl]v41X߯_k%Nu9j/wJGA-DVk%&2IePE3%)c˵ȩ(TZʃMگS5Qr 7xwQt'"ܯrGWܽkܽkܽkܽ[=h__Kؽre*?X>%y{hKz@]&wWH1#$}Τی%E6dTL A"0"jQ#JS dvbyMb*,.AI܌ ߆OʩՓZ2U+9skS}>J6NPQXeJ-cĺ84"4@,6V ј2eC6ҤbQX US@=hYOk:@&QZ;dcર&]]U<EŰSICAuڒKnA'16{u լlRi^dUubդ! 
>ިZB<ע`5qAR̙XBkmoO S"Gߩ+OPLr-ƾUm0;W+Rݣj2Ja4 +>bbpjΪ^UHj_oPyKn,PDpS8G S=֕mG%HlצlhB;|+%H *3IJ0dEt!hh +mGŰWcl[/TKC-[27;h &Gsʁr<Ї5rtY1M ͢1[hvEAf 2=: ىv$%zޞkg>8i\;`.B]u9Ws0(%@Gk]AvΕ2rͮ]}H!!*&6LR;SpKzO@htն/J"\L tavKH*ʖZds%FBZ1feLUXeYy{|i.3`hMBЛ,$gx=aoFg&M+9}S(-ZDjAdEe#gQ[oņp 69V2) =pd&R§mEJ۩xr9N ݝFw eW{k@DOĜ"7,ԣO$"ɺyZO F;QYCR(C%ށX7R{Lu$v8UU+5Y"3u9[o$fqF^v{!2 }]Zje_*V׵ noP)Ě)1Q'TQ <7B4BٟVA'{NJ)sr%t ,SzBeh9D2k6"1l,*& m(0Vߟ[Q{¬R7U L^h=Vd1>?V sdzZc5ڏpP%Wq%!يcxSdFSp?&CtC vs M lL_@\rdXR/ lȓ1)7yuߌ+4(v4*/;&h'҃4Keu.r~<d-5tRTd% ϰ %Qm>R˂&S#a)[H4DV劾"Y0=ی3"@'mA0ZݽhM/qhE,7qL9ETHfL8Sx1P;l슼}b&Zv?./ 7܋KoGPJjgy;*(ZEAi\g{ -?9ۗ+z2SV@Bwz>Ӈ-=[Qϖgƛ-=[zlnG!Peː]l٧--G߲ek-W[r6H^龱S#m,^'=4bcӎ7i27Ν*uU#&?v -FXlhnݬlXChvEݾ1g)oOn-<=At $I,Xg͊ bYA j@q`d"$ CrIݾ1I.8)DϤ} Ykd7Rf<ݗ4wo.)mO  pXJ)ys+Jm+){|AŝBoLG آbeA(n<Į 7( CcrضbNĠmlBXow+"SȢ 2Ɉra9@Rm567e`a{[`miSA)`޽d&*05[6 j60J5[[V0:蚺Z!'f̈́v?*ߊi\A߆J9{ˋ#:5cDca.?*/&%&h5ٽwZ F}jS8dX7ePyR7CRe5{Hz0(구k%֫%Q{QL'eiMRdWtMlgy{.=h2вտ[ڧ7A׵6`ur.~ێ9)g^O1s/ާ?x\ȹfZ՜FX#~ٓ+]>[JHEYRP9AxMI9S̪z7&JJL5%8wěÁGP}78"Ir\7%~/gD D@q_EHlʸ(&l.. @KU,$"\#JH1BݾAoE@HXi-˥ay_Z ]oGW}Y"C!q.X-.H6n lYEʉ5"R3b"HLף]U }"2Voy3fh_OhÐ'o] P}f7f*[% Ǒ,[NrqO'?߼;X|o&po=$@vfhlfF?$7.Hf0Y[ؾĹ^% a5B>貘~bcASFW ^x٢l+'u^ًBfxqР7'4b9)C}חA{;)"` u2 Ę.0u1ews>Sצ~W+v^?ꄰOXV5:[?L^-q×Uݵ(FUqqʠ SfO-53t^Nta3çg+ڮDpvAVAIAKТF<3$!Y&NŌ^(c 1J)ckdnnNr݃*e ZsvVJeUTl}-wYQG12+Y+  Q $K!C,* 0>ޱ^: (q^ +aj>Ŭ1#yD#R2s>p΀21l$O@$4Np|cr рvbu oc!ۊDY5x֢.JMeDmJᓰTp^~|`[_6$ BFwM`nC95PC+w_x-!FD@ەR`!n%Z/\Rn3mZĢ+E-dV XwV[>r+tk$rf]ش<(ݮ@%`4Q VeV2r b8o`1i5 1粋jahtJHYweoC1XPGtQ(5V{12Oj- 0p¢Jx1 /Xeϐn:H>8ctDD!OSŴ9Lp[;0{wla`;iQ{-"Xn&0H.-~&!vGvhHLrt[=%F7MFM)CNƵSjo{[,FIZR6a>`:ІD^aG)!z$vݻ;a}cG"lp LHp3epؽxw8Dxj$aIV.C}˿.?_%Ӿ+vz4WB+}ʵgN;k'VJyF=Š@@dDd=p\6>$q2 ȇe6x,T2~Mg+P֍jSu$ DgOy,Rb9lΒI MH8Ԉ]ʐ0#h f`:{ec0]HQt㝍el۠ KHmG8װ#3%Q|$a\1T(+;kww RLdcEwhe8IsR(;RhA e(Ü` ?E!Аj)+%W ۨI4J !Rv"Vjn3!XԚݵ &X^`$kBpJ-],Ӆ43VXу;ӅWFk'^G̼]r(4,R8nWK! {~2wߚ\8߮p Bs8½oޒ}<-pu\̯n每~H?NW(Sc?'̝Ojr5E @9Nf'ut\]_/0HlF[S +51,Ąhc+'/Sݜ9@2Gc,,qo.T|ًzI d9@à>̋c2Ɖ0_mL`c}AEp̬];?Uf6os9;uLccyR(@PsǬu<nwQ`b0hPN8繶Y:Gc\#k*XSB,ecUo@Pu[XעjhYa%7f.S3&Bɕ %ގQ`8 +;yH$Lpizg`QgNP9fC?,~!KA$w8_~;c⁴Uӡ"V>6NF rT;l[?ݟjk:eJd&R+ 5*,.T$Xrġ: 4{DZ`,`z<;d3y‡2Y 4SFA yD. 
0?hq9dVTe!xdAqe tfR2 '$YEQ0" M~̈KH%p h3iK%*{iM9X# ((N.#RrvOmi~QIeP (j@)@wPM-S\L#lDl`X;f7-YpAq@$# D ͖RL48"`QkTJ1|.t`4WF(m61SA,rbnjvjy51/LDu_Pxt$v9j^]Yw3B0E%{R`@^‰l~8 9xȒH<cΜk*|5~FWM](9g'{FwizZG:M/XG  Jvw?sO G3bsA6:`_rRN,5φHBX#ETdKc`y@<;CbnЗ,NfuWRKn݂^RL~kd\֊UV},WF~y爗Upp=.m_&Sl0a~ͷ~,'L&@QB5#ڨXFgcuZAI_Qdڧ<>o .@Ot^ )3\O}%xr$F Ѧ[ zU58EOPڃr<">eqcԴww;z-t["3$Yhx5U0(U cͰ7dQGe7 D>T:g~zrAm-xGʋKyAOK+L%BYidvk8[bGi@lPa!vz6Rtp2a%j/p9[u?`ynI>Mh7ʚyg \S,,Q1gd IL1q;.˄0e/1+JJt XC4%cHz!ҁ|IR"|Ĭ%[zhX[%]s5΁~;U#Z=R8&NcpYHV>)j̷Rom ژ1'hcN6fp}vYq@Ohpy#'*JILDF79JQT2:Hb1bsxy=D@=[6i^s2?MÚbGIp06`]gSbnB)-Xubޛ=4o]YWJ3[F3f\_K`I@`XݜQor[ bDJ$ mDJ.QǓkl%zxro2s% E*Ń(/GMNTT@3xE+ȕCFȨvc/]X&0vTuqbsܨ'J3f|&6ڟPBptGUZOʷ{d JܘP$=D46}A*Z<\C̐9*ÌeOs~Mn)i8U@290 T$UB%eUB6Ga5eSv%))(yFaJ3ƧBܳ DWVRO1X+o 43'hfNfY)YIMϽ.z, /"#Wp ?%N9N ,^}_ C߼v^!&#+: C%XPcVL}/?>+5Zq^n`pGWwKuvPQGpd7xkAx9>KfJ`V}u~ -&:}ʉ1CuKUg;/Uj+ƼYWǒjgׯL0p+jXZK+/s 6][s8+*3UCwSqvj9})65}IHAHh~nt7#rl5"xIc >-B׵rѰ or#9AP:fjVrv TGBƞx~Kb=[ZG# eh6f'ႭdP0F(X]@%e4+`>B=RҾ{z*>[LoMC×c˓PHJWYB}F z!\Oߣ\M?Z=;SowF=mgI}k2v[2zR LP#K+tMl/, ӝ L64"TTnv(y鸔x/c~g~Yѽ]UWH ؏ m_mC7x0xcY2mF9mV7ez@tՒY0U>Wu$4T*k 4"/Զ|1fz1]$Yqd4 ph Ic{@,@)μ1mcSwX;&OyX>HȹǏ۝32kGś?O;Mr>W6/6#5̑l pB@3ZMꞼtPU+iKa%;B^1C !TIʝ+ 4 QAn %x3DEK%R/`@ :* C?pĕ93yA17Ah]Grn@cYF~T2y8c%wG]ob)% N,"ԸJ`3s>F`rۻN7]=*NX2Ha1GRytNjL@/ټVK{'!֓:+f,}!z!c {SSy™u3lxKt.lPpy2N &^bWAG8rG|AEB tdhao&1Q[ VNɲ1}8b>v9d~tN1ncrԙa"xSnLPэa9e{]˜r9ЃHm`һLR) P͎7nO~ߛ{&ԄYWGm%-, #n8q6J$*>q!!)B'D1œC 釒m'b!ip/?|Ĉߢ?N/%eT݁N>nDEMhf>>ԌӨg0XU3o69=ֺi9PE6NSܘY2/2gzHjYk^ߴ7=T2K:r+ v?kx&H9c;]yN8 }= iq x{LZ 5;w6u63P%QǾ~"})] FtK?ύ870kI?~1+y%>H%wF漆d Snm*$jF=΢mH"*O[vԗ~ՊWx+A0e _S֬>1 =b0K3<$W."j/s$];G+#?H_5NJ1Mi[7C/>Bq '|P|Fه01BA.^&1}iYB$@zXSGxbNr#ҳykb(2+@9aՈкbS1Aw򑖘9Bğ0 aZ ibk5O/ i6E}C3y^ /OR.wZ갢`d.`nqꂳ<44i^~H3ѵ",H@"0pL獁DmnqF} Gq -fFﳜ ,R6 .۴q ]9!A\">x!N. FsnFRG0|0FIEm,Ҍctqv6X }NF`4bXbI{W (U`Hb$BRDp{\6ExSGFS⮙@ϫQKX*K@33n. ;9޷SՑ^ݼQiTmm4WOpQXpm%5U> tO2}~]-m6'&!(^}jxXk! ŝ}~2͉/7dRnύ@xSlw؟o7wx]xQ|VCq~D"K G{Aޠjf|h~N>ί~* Hr6OvC;uMcwWoN<몀/Xoi`RZ.ٻ,∀FbB<c/`Rq`۝h)wvS^ NnIFYUVN(o!gJݿ3JAYF+'~ofX?u@K4]1< ѽ?1ͥ7j&P8Do)QT~FK}K:[ ÷AgkYgUի,*hZҌrvQ ljX;J͑j W C5:SJM? 
z~/1GeYn\,o!YA[׭h+ܼ$ル/o8?>!ە[,Jc0i*yLR\:UnPFva*x >/)]ڃ#+vXUbJIoLU)@e,Lm]~y`gĀ*1\oyi_II<k{&;z\vs:)iUHR ct5MCnt=W s -A J km> yyN#xcnSC2iv݇Xzan~l4??Y-N `l5M t,qTeTc`3.sx(@+RK/snmQvda0RgajCempZٟTmZ7+-׌RE~A邴?v?i}3k6x_n)L]Oϥ67Sn4X4bQ8 $dTQBqK_H0U 0՟u ĩHVFccc5?cj5[76^.EKIf%R@^_DwVʳwPLS &^!]FX~KB9aeO楻Fh#UCqM6ZTVvTezvmA[J ,lAJ jHT/n7DgqoL}d쫈0򤈄9@磘'( FZ*{ l3@1p=ga2} VJ'.շ@̂PYZރV6<0a+Ofvtk"O((7@Mas&uWP\=#LVeh\vS "])u}FX*4U:'K:"kF:uNvB/I c!y m gv۵-P%m A.~&"G]Y0%6^-Gcv.]AA8%{D_!m39D]mNxBac+Tnߝ9aS,0$\Q J(X*B|w'컓*(S#$aG{Ob*lx[Zg?]oGWcɨp쌱ذ !)rC2I}${ ^}]UjS̊}= 1+ǟo`ʺB2/Y R#έY1S~rv.1(wQӂ{?z(߹q珧Mϭ>==yJu#ʤvZkQK휩w2ڰxuGWFq:f v2¬9KJ %vDh93t ]_̌K+Se5lӨQiUxo9V-_NvFM齏r2hͿ sowhhfYWf5@ [OWksPW$yZ.Bnr}׾ԨԹ/QL>JżQqTA#^"yR68rDe2"ڲFa8{WJB 5t7o6Ww =]ZkK#//gg45^et|&|wq$NZ0)1ߎ}opݸt q0>sUC z=ʘ%8MQēp</>9) $pmUKAEiacԑfԇ/~8yxt>\i%x蛏+ V Hڪa]ipxz1GwJFeɛ_!7Dؔ ./7G񋟞>䟏^6 ;^v]LWѕoukgiUwe[w}*p EMz)֍p!3]uʍi)eg4_@:pwM׵|K*w/$Ѹ|ٿh Wϊn5#pZt*5뺲Bu6J597fn|zܹ*RI3|;Mn=\0wV~}㲴K})0sy9Q%zbؙ.aF`n.4B>=|}x8-?ak@h STU޸&{z?>ͲR8wWdnn=>~ssuKb 1;<+EI{cYiR8 7܍=nQszoЗiи)6*0HxL61WS_ `aL#Kh:+6ؙA_1ی >3@g3s22HID;%S*NK֧1Mgъؽs.롮PZ(IN,zV9UYP"iKmg,w֫ TiP'm|֫]ymޚٕ׮⮼`V:~sE+hK\p+調 Za< ݟjwojkO ]ekUνlMv[V$6HS}(]=0]5m]_̷۔.'c[WQ6 Mnƛwnۭq%4c[W+lٱkgYiJ3|c#XQs]Pd, I LLJDi([NTv޿zk#n+9~$ipy#'8Pޮ/LdD{^E%jW\+!zk @x-Q3^M±:h*zAA d;#9sT oNhYΤ >$t\{c,CY>!/+Ɠ1nKuʂ]:Ά9Ibb1AH¨78K*2!8omae 04R2fL %ҟ&NLp3Npܠ9ڢB(Q@p Y\oNHoI?"S%wH6i[2C_ҕte'%]IIWv2lV5 ϬSJŝ>e!#'{-g>GG'=TZMH5nڬK L*?q9{lֵ;1)^vlS]{1WL9 )mRڼ4ҮBJ.ݡ> ;!qjFmvjU@U=]$5dk"sz5އp9ZYaDKB*V.efuκ/43ulfDe4(2 4QB5e1׍]jƍ&$ J AE撆?DcfJ> 7.0*Ynbt#^>qۍ7b&U^UXS2%UR9ؔN)L4YtPh-0xxfY2IЛ<O=$@`RD un,?9j}ЁtK1ssNY*ڪg+ML6JGi35JVДfG"BJ'V;$PLoDLvZ6m1C =l6XW5UYzMUk6`m± X&"=Jf D充ˌT5R1HOs"`YDJ&J9)x0nd:MZRO#RG[!\9j}"}c7nڦpZ8q2&*vXq+]DVf )emO1gǒʰkgnWPNfIl=nK6} c> Fi^hK4 RC^-IRX`͠3B~A]) l.;Ǭ4 %,TKw1YkNR"JXa H3#M10(p˴ eΰX RvflRH9PQcϦ2Ƴ " j^Y>ٱ1a;SV7kVa˾|5X[ LH];޻dRP+]EC-Qa(T`ůJ%\ x.6@NIb͖Ѻ|g"nlL84v?X/Z&d_Utca -b@eRW6IqjRsGZ"fŽL-LUS/ _z/Sa>,jXORn1:r&go)[%H%O(3Mb#k3PRH*x3 4*Rh X8&èHI$LR`[.; 3qP#&C@TT%hZLb Ƨ`a꽆ɅKaStXi2؏krܔ 0ZHx] P`<+JO3h*se[|zj鲀4Cvq&i>LFh~.kLKċfEvE ~⪆H@50Q~=j\5gr=.ϽL`WVʎd()P~8d=%Cop~J$Po>8{J:ILk!xtU45PTsQHUfW͙ec+#Q5 #xd%{{Ѿ)K+-\g-J\˚4@T\d:}D^X/e:#,rӌ %L7EwYKA<ϵ<˖J@@ g[ެn6뵩z;ƌ/%7M5PadtV`QD@*pXyLS`J,Jy8O%yeZdT|E+ LX`*[({FIEq)0LL$y$2M`4 rG~`|yD-CKE>=tAXΕm+E,8\X>GGRiPLkk+ČvA0Wi,%h~SiJá__*[1ʧwc_ sv?ɃAqvXx1-_/mxYpM~OZ/?"pYQ0x7k`058 4A&$4Eb,x\!@\>Ґ0\ Ula HLvצ9`bɔX6`X<(]*z]moIr+ 2tw8؛C,nq}B鶵%/IOPe]æDrz*mCE/*%$4gF)>Pr6-K@15%[1AL\Ej$iU# R,3x I@hՁ]+\U*4:NzSiIN;WD%)J_Ҹ .T̈ h{!IVbॉ*#+@2SQk-ޕiFĈRwedj)6'j EZ1A+ &%*+/yaF?SV l)L 6GZ>R2fX/+dч9U'ҽ-Ai䀒*a.J2 XISϔ&uK~*e5H%9r%>yS$修Dj 㔼 U,sǣ8WK^o^]ql?ԏPd(AKa[*PI2R ڷlM'Z?k&𾅐!=Io!#R.3M>:7-@˕$&MzmpmPKep5w6V)eqW3-i3@力yN9䝶&7Hsc0WR"yCj=&`-=Aܬ4ZR "BlWi{D\שg༑ }ƅ`\=$q.Yn2o_}x:e&c>^o ] )}nny/ODMQmw4qusgn=إmc,ʡ=jUYL&[ENʁtʋ[tLT=h+Df@v ت(xтBѮ֋%\U / }m I:A8C0wL=CeEH51ޡWB9‘U`3A@^V쁜Lhږ _*<)<9Ki} Oη>dY̎Tȫ\da !ǣ7&Y^4svM>$람du(Z2MEӁOȴB}d|Cp\`&9{S[Sh0TýQjz< DX:;~HwQ}]+yklhY\q8Ʉל,=eI=`a5DeE*J [˒R&p^uS}bٻ~(ͮ>ޤ|u()Ζu/4rS{d’<X{ ;Ɏ=ŠƠC"M8!xБucm17= !`%D[+R<1~#QLQImlxqäşu {nU!^SqsK?_Kr6s'jGE~u)J)~B"ݝdH}FβǽBD嶘Q$Oaz{$O鹬#a:ūm,iFEk_rU(et:4VxsP6ҹ97ӹY2ud]Ց _tuImW?}?)߿4M6%3*OIįrl^Vyi^YݧnTi' >WX:3fS3Ιۥ.*:[k~W=H9o'x byHpXbzmZliGf3ve{u?(1BQx)u() 8,\L颒βbyN4XsV֌d 5(GZ00  jpK 8\6c S囓,l3J1Թ&5k%' `l4xfІߎ%-b:l|,L ڎ;lv7 L79ו 046wWvhΚgðlݕ*eΙl>4Z4^]=msvc풞t"Fv m,8&~Ijdli3a` μXfᄻ*XRvb*3K^(YQ sN:JӜ27|jO8מ\OdIGY, N hJ3nJOb$ kYEG4 Fcș'꣝>$|t&uض}i^F"duIǞ ںVI^8E7 C_kCZ9!=& ˶޵`Dcܩ0ךs%&iְF8Q j[ZoEl:C4OsG.7`y@C:i4ʥt02)y#2Qo_zHoj|IGƬ!Yzd@1ZT:NeJ \9 E/ʒIFbm*vx;[ĜXc8 YsG`oĊ_?:Ygqh=NQDgK+P#.ų; ӺNbQB0%Q 1MLYz7JY5y蠃=|*icB7I= Ri/Rc|xQړzpiGV!rDyYH GR{2%^13"gͿ 4:y_dm&.jmGzW>78i䓮2 [!SSRU?81 
8K™7zm"~2…V)wԫPߍ=/BiU;FCuc-9kt;<?iLG.ӑty)U쬈{FVZ;>f*lI<0!&d7%}o3h:_8?Le.@S HVQxӎAi6JF]ycT%ee²Ÿzqߛ,V(̐X5 _*Uf%(Ti|A ,ҘSM>ZZ0D'Qy{-Y0SMNVHJ)I2a},vr PB%W5IXUEɂ,zZ&kv4$E+ٗ6 (N2\9E+\y/Ā|q T2z9Gz}g$ierN셞ᆍv|e\F/Az}]>)mU߫[NHTh6T{X -P-v&dbZ7L *r]'p-h]&z=&\n ֋F[{vZܗ6NP0`i7+ Z G;H\W\,1vo(ii]CEҌh6bhYm(Q`!gsn {:vw)O߶ "z .6k;k309@g_WS*&Z}~}VkwסLaOl:L޾zr~B)ͯrY!|IS:~Yr pyrȥAQ0 ֢QWe@ru 2WE%s, VdHv+v5w[RH|v7FdңJa?88W6:4<et?ыɁ,)dңr{XЎYApNhD ROOeU%iUN>-K WУxD(74fyEHۢQSe D!ˋco?ﴆ[JpJ-O(_S<)kW%2%&HfIܴzr냕{$+$a[DŽ-dM[+l|&(uTm,dPB(A6|41BZ!6^c r7#&y×YԾZgx}'ysJ&2&?\}yL *j~*=87zoF>w=9>=>|#MZ޹r3OqP?iά}$+6SJmT^۪ۛ4+~(Wί{GXo8ꛏ NY݋ Y8 < 5tKb+( `PƒQ)T Q[A!_쀞aLXY.u#K'z3:K&J6h,!R]Hq'6|}iYSeȲ ӉBZn%4wZM;VBǶKbwA!at$8۷@&2e`".\ߺ*ylT=:#_s c7KG"*[;ꥸ+Z@>OR.?j.N'+#cUwRvCg*5quj3VBH2F gdJ!+KCEYX(|jx^pa}Ju1:b_jզtU _T2zU) SsVS>!pSL k/o}0慏r4Wq@GF,Lo~TUl이Ez~=++<Rշ_y 8Wg6lsb6k2qBM7H9ڎٝ\{ݷ^H{p?; G*LnǪO;{HPΝԍVCmž TRnTIz{ 42Кa>{ NL׼4BjG:h0Qcٻ6r$Wõ̗[|؛b3iI&زW߯ؒ^,RL`{Vw)XU]|r"T63&UyMa@RLR]z"UY"3BM*dOƋQ^ @dA&N" P"X-0"l&@DsvNĭMsS$hwA%4U.3x#;XQ)f[N Py~N:5čjw'*DlF>(;<qV_3F#wM5ap=T9l-.NNR\L1 ;QpE2ЎM,"(U` a oLL,P50n0Q4Yp5 ₋- Tr~PPEߘ,V ,PAAK٬ג`It>n|}r78I,(uGw_&q^ܡm?jbVLJrB'-]H9IRFi1꺗%UAZP⦼Z+ W3shw+6Lwt(Vw&QTVa^.VP~A/"c i\,Dp} k 2 9wܚi̖"J YPO'ꦴWY,'< #;\nIUѱiQ!fyc+婀=dK3KY UO5,ge$W+wU[J"Fz~IBU*NF7_Ǒ+QqlF/v@qWP>|Lt=6Ӡ|m!YDdjY0­MO"pìxSs;!bXL2Wp48+JFJ[ݸn v9:*VWE~jk5MR>5̮'͹ޣias1ʷ*^wW(Qϋz1__8o3|4R}g{8Hz.=skk9L&w>4rxf+;-Cqz~š*\#tRFâZXUp!̺; t,Щ>^;u&L 1ڄxԅ֎pKa#< &w8lGB9 6R +!hN"DǨCwDϡ,%c"P2"R==M(5E&b pэ+t?ל3mVc<$qj"T{=_Ilߓ/R3e% %_^$ g8o ?Wyy;YS/Fp+1E#Cf!RJem W'3}AܙhIcI|ݾDk4!%)k,E,C-HCHU 131:N1{47 ڗ{EzкBű-jv`J7gvNtIvF`N)4DT-7eQ![;*s .)6v"YДݓl uz(??dxŎga\;b:FwG A@代LZVt`:`gnLu!*?l_oCYKn^̎%M)Ӭs:]cW&5[Pn=^BHIhSk õ?G:U#Sr?ީ\]~ =I/i"""c82-.ߩ?s_@I@GbRq;HfZNgEh9]ځC)k N1qg{^{ g N "H}=0$8'۫ͿO4  4WԻ߶:S^=3|usD. 9i!Iɕ֊) ZsB9tLꋃD&o#)zF39 A#,S^`+F? 4,BBjءR9XjAPJ*U\\XjN,;Emt=uĈJS2Fe22cu~_x Geǫ`1Bzv99 G)|{q.&rd.Ͼh1xY/fysLP \VN怔8o$GÎ [JbKo 2SE6ᎆ)o9åv>{FR 9*.4ri‘/C.SeCZoaǍE:NLTiOcDN9J)un=a;0xԔ"ZF QpU8ȟx~ gR BzE@7S%ѕ<7)*U9{{[D!tA08M>ޛYܜN Ȳa:B@r9QЛa|H6;5xT%4-:Np$CqFv2VI$ X7&cKr'Y)@B3V:{ŏcNO%[; 9cxY'ɕ?{/a5)Yi֧o^sr_SWn_y{ƣˊ*}7YFySw_ֻ\?_|d&U aJj<;PIkVм:*zqU͝AC7/n?־%i:|7ǣUp+r/n!k(k]3\ͬS xpjh^rC_4hsB|Eh$^rY)e*I5w(} 3Zu5zzrszi8(W 1rgvG]k? M$~M~OEjO'V?EF2b\4`gSY|C)>4kQ9Zz@IÇSëUßYڼ&(A-ȭ%=;Y%g'W:Qtݟtp{nUzZM O,!˧F1w-7D_ ) 2R C-M9I =Hⴷ[tQV1 ڦj$.ϷtnT# I3rF&ݴ hZ6F~Dmr w$5r@ |c<*hssv·,"Yo4Uk:'x3rJM2^< #UdX9xnf3,~.9r.eXYJ8d8=t8uK`LIZR9 ( O&twiF^ 6t{~DÃ:t l*չ!r!:m˗'h(A lXP ^1zCXRLhb;1gfL,'V&ϪK-5: c -f7[˳ZB)0CsesTA@L HXrI0Ló3d~e}!,%W[#^U9C΀R*ѧsNiQ{Ay))ZBW2ptzGn00NcHႣ鄫-תp 'LI-7.eSI,T*̣:nidKrFRYvc{0r;L]ZkkZWALu.U:뭎 Nxš$NH4ZueJbhGI[Fn#b̗M_pH>aA؉؞bm[ܔ(3$"TXU <8[#2ή.2,Co+!Rx U L0 ZA PZT7tCPhW*6*,L2C<: @Get-XJ﫚~@KM{>! S!+7h8YѣJVC/ҟ~Z"c.VعhYT$;+dٚ0 R /@-gVu(eס EK_7_/uI@ 5ӳ$:fDM 2CH}ǂE/$0V@*NP'exi)e4r SMx?InI%}'Xr2zB7Qcd3.4_7[ cjĦx‡ftr> a!Vk!ԣ>T|gE,]h zVх^NS X܊"l0*vHՠ 1(k\ZlƗRW6,X{A4G. 
`B٭a@- [/x kǽt;f26+vRA.@eUzőx,ˆ&5`M-kް!"} H8IRNJV+EJ@Vз=i9l%k0A͘\VfWǖGTIãj{#ԆImkj׋Ռ;M2Wv (%}Vl;w2|#$oOL˔~)D f Q~, 4tQZDzb9j&[LT1qKC aIݒ\],tA*V<z%-Hm(ܨcPLpL V:B"T.Cal(W)S*WAy|WC>D9DVJ\j[>@s-yzXc7G:te(*  c*e ݈*4E$ڥ]5h2M&ŚD{"ы{' x G&; 8k9}\o:_!8o>$LwzEʱUx>^_;b=(|rgo^|v'd=I$TRaY8-6>KIf&6A8Aۗ,İܝ ř׹hF974 ,O}S4]<ߋS`)a7r6cS[=/'iJII W&"[!+S!"b oy]PzTHD0a+NwFY?g=s*uUׅd.}Tޖ3V[ϱA C0s;!,B ]v!{|Ct$I1(@h욯TN%"3 rg6)v&k|vy?mߩyz9/I2ԠaJ}d/sF^ʗ{#G:ibّtqQ҃oS>PBj>oheuF- k-E܈.fϷ_1m5;ZU|LyY=WM?>Y4&˞dӘ,{L]Jcfvƛ c᜹RbiXڗbAhj6={ϯ+BfzV5Z^NUd2 tl?ݮ<=~9=;:ٓiÕPj)yLmF3i|;rY,djQ$4AǞif15w ^9/k4A]'3uDYL;pVV zUd7պߡLWSsܫ)hi÷{\Lp;5 n;u@k3v_gסּ^i- 2wZX{ЀlcN:xQtCiC#uJ}JiL8)1㴙Tt`.hŸ*,ds<}ɔz)Nږ6 ALVr'Rr'|-j8z+Ylp"$BbQ\[JLdJ7f4@m\`B+haU-YgQV B_eL%*U0越Rw,sM,,J9O=\R!Byt롕enﶛٻٜbn>@2ލ"J\j)4{>W\TN$ZWץbV(5RĈ1Y̍(.W9̂jcƃ.))6J x )cfe(x҅5b N>P:R(s/r\4St8WHb:̤K/ --AKM:ET.$jwnu%dH%)q&u-at8͗&{:p)ԒۥdC;R;1܌~ [JÔS&W< RzcChqAZgJK: O44MERruT#?伔F*ե̉T \M[Pu KyE|K|=e1>_[}QZ(WmK-2*2g,V1zwwZ b:zcDi߯XyE.[\ypԖ@Zv9=8G/s8x Moy:9`C1$Z"gtr4hehA`L&roNzRŪ6í02o_@+/) n${dOci =m6߾HS2S-EIBg4[r W9%]Xi#߾r'zʝ|8!а[*nŠ~!$ÀFs* 2e,O=QK | tgϋ I[RG54dxLGO=&^nI/4c{@ݏGΔћqo82~Rz+\Z >E =Z;Vi$*oImYque\V%'2 h.J$$g{dF磈w+;<`ť6pʮ#.Y9|KپqxIC>ѝ}`3(yrJz~9ُp1crOYg6,,;!vu2$ J:"WZV+ɞyYD =U:>; ;]g#Y-2լQEkOr Ym}?;yt$ K+Gh{: >jWFڨ59e3lZ,I WUy0p)HK 0~~+b 绲i o~y#L0}ӭNE5&m__2dHWev>\I zq㻋l^ksK\jy $r Wy컮%㪹/~&֙=rvNNܕ{Ql<ҽh=3I\PGʏS?;B.pS^f'Qnl\! .C%ː?_۫տ߼ QnH߬b~疫Y(XBA0PU&W=;õ/c#=(*Uzaˮ*@5IݝNz t-k[ Lp^ l0b0tBns'S m6pNlLfqN TW9)8e3rR0LP쇀bi(vfY=š.p)/YIa!EC E,ƊK& 32([ zLIK4e.cs5BsSB7br:cAF~FHI=@讚+Y×P'xs_oK^3OSzh!P , L_ovIXh`p/}Rc+Ӹ4&wb5RL׮s\CB?5c9HXEQh-FX&3Z#$wyLRYԭa-Er/cWo83U|7&/ veg~g]ڞ7Sfƚ9@#=3]"yyv!ۢc{SрI:&iZڊI`HiqЭA7jBR!2x$TO9N*܍˘߫`29coZ@2EF.aۗ1?EF̧^|u;bO1ǽrr6)z7zپ()ʏ`լjD/pUGӘ)`ä(>6`2pDRxs+Aeet&h)0ԚAk'A^9/k4+ ei6y){w9\e^؎f0v~IӲx6H^ZB`d .6.Ё^XB yS} 8j[@j1jR]`@>Wt} E'eemN%gًCќ6XV`!5 ᜹RbiXڗbYDSә*tL#&=ٴ9AZtzgp߬^u/*~:x}uHX4oF i*+ kbg/ wd8yu]\'1$ '빇o^B)6$ ^V4_J=<Q&p zScK0T0$e쯚=H ‡7ѳ}󿁑}z/ izcH!֙ u?`RS#{wcmnڶH\8 E#iU~ݴ1VkQuebo^{nR> _ gMڒS{2m "rt">un>]ܾy隐HQqtKDt+YrӿU_G?=x =cU" ձD]o.KFQ4W*:n3Y^}:.;~iRDeÍٮ ]ImdXAf1 :4viXMhfT|KT0h}m݋Q\MM4A_|5(?;rۖflljfd,-ա^ rAd.U`y)>;FD'D}$*Xb]C@ jg62$dc|%"sۃc@UL슘XybU q7b= Xhf1 <ɊnF]Ȓ 4Eɰ7}ebxK0s Ic;A_`1߃}J)8ƌ؟c"1z 2s[Ϟh̆Dfţ-[?1gD9Ck8) ا Xgo-ܐPqaZc ̭J䦿h~=ERCE]w kbLԮ*K;Gwq4^Di㻡LH kv͙P=80A"GˁİB1vU|h|łHVjCo./;c+8x٥Fs:~ӬDᇰuD@ى/#4[FӊK1[z" }7Xa*=M.D&X H8{㝧@^E!̉AsB+Cp"+#kA@FVv8@;Za kdg!@ cDfdt0KӕLEVJIcˆ[-bHE[$a$9Qʽ.[N`KA&5^oE~40"2QZA1pEB$:FP D, A=! IGUnTS≠8~Nt~Ww"h+}o)vMUwK+E\ֺZ%J)r/ߜQF;uD:^;3k:n:'4Or5f'| g՟n^o_l.8H;CyUsYݖ*D-F@KҰv6[qiJm jG$T=#r{;l<!@}߱>^?vWQ\ Ƽ պ3[}`^jP RngN'X5XI3AZHP"Fr6"Ẹ sٞϤXȊ.L2;>w\+tsHdٱ ) IQWIdv-ÏJ H`BWT͇˳WgU ᅴzo~=c2N //kY-K=%HJ 9ZlQ쌵#5d1^" k5 ub 6X dh"i%L̬C=bab. XgbG6p{3R2}VI[﹍NnUr'ӦOIOm?۾`\kbI.B+8Qa_F&?0!jV^ 3tAIMuYqpӚR5&(<YKC7U@5@jCOu;{£8Z6FVz.#bȇ5m*A²/63r ^'YXqhZĽ'YicG CN΢;yaQ*Gኳa5N X!ץɭTNyV$ L>j4 ^Du1A䁾2/х,$4R4("'J9 ȆrvFQ)"I hҘ 1 %>l2||,GVk*SF iYnfұwyTk4҅dwn[6_ %{1;hoo3\bX0VuJ:D{(QobItwׄcNš2S`%͍CX`ZX'Ri}8؅lxv%0@5xgkll(.cɢ@EDdJ#e!8X팫g[$`m:Y}ZU=S.~La_z bfMINRQ'1s$kĽ~ClѼ.xc? trLx*qF^Y!%w@2(!xOYuW={Y3`V!Rʚލ$p;.FN  Q UDCD$ D hͼrJNJߙr8&2T1 (!exIdXiIsmn΋5m| ;FG#i@"Um7 '2țMH ޗj>]V|dV@,f 3{V8߻Gecw 4%(iYkYnҫ4v y{8+$&'3(xѨӧ91Y1Rpz * -3#{_tJa_u^uGN|w=EOu]gܻv& 5 73!QjTyuEz >1c}2q8:z{WL/NCuګoϾ -Mo,uoͼ <$sod^ȍϩ?܉M s)q|WOpx=گF>Fw9{Hkֻz~F'ˏ R?Ce oG쇮kW(ͬG_uoI_eVN;W~a!{=u/o+'Uw>TlX뎻V}wYEu_ˏ7&`pۿr>t&韎~H$颓~ )!3G?oW&fO{ݼ9 g7ꔝ~4eű=|hs,E Ujk&V7+gD.G*Oz,~vllǪ72VM߿{5 O {6V zBe:'vtkܚavhqScuiGVl-Z.:s[o87{Bb8 F n3i;a>Jۭ2\);;8'\"3.yg]La"6Z'5-gRju?<[D~|_.!wóG*2Qp=aY;stk8c{ 5'ݭj!sQ1@XڳIcLJ%yDd3T{ ~. 
y6 y员SH݅(t2*?%E!wbfC2 g֝iQ3'@+]:f6 dbk3q|2w\tf:A'A@A#QSX&R.i+ͥnwm=wivp,lmzZm;Z&q~SymƗ&hjLQƷٶsm;wܶsm;waHD5Pcc ASbAP+'({|m(xAߧ1;q':d^ǝdOeC\ PA(;6s\< 5HIIԡbAWE(PQm"evůs<0)ϵpibE,dA Iz$ԮFK60+F|I|>HUP\[^Q;fp@5҈Z Ih׫ت;L(&a4!4vfJq(!X27">9|?ַ!ЀjSY0!`v’haԜhc,TV0c#j JHBYU!TpQ!58$4ƪCf{,0 i\ͧnL[{*ٻ޶r$W fgVC#6ۃ;3xM4mˎ%'߷x$Ǻ<:Iq,b]XdU BAs -v( %Ρl 4Q\"DƠn;3=nm]V!p+wCYm[ZTV;(#WH ⥏M@APq9B$srhm$*f De BRD g0ǜf¿.Ord݃TZY >W!O?uvH5B(;]zyɛ9~2.s#&DŽFooе/VTy t\}y]}՟MggVr;a<-cTт+B@?<=<ZO(3+gNҩ` dX$@6BNNƹo0D&6+nZMm*$!^g(d@DQ`wUg]:3Ž~H++-m졃GeDh1QephQN(c>1m!04 (ʼVx UśIt&i]F-p~jJE1+IB[ҠM(=yA-v$AG٬5Z8WcaQG^Oz9ܟsӪ)|9'a3$:M(-c1qp6(!A[P0Mi/oWN8!ux"5~-": 톮D ]fqq Oo5*RtocVHK&A$q(`&چ$';ߦi1/n8&(A8E;`qc*@a 8#VA0IAu}J2-Z)0TpryA$`5b&X}.P$2^[N`1.eqt4!1fDȮ1NƀTQ \D--:bef!`G8QJQ+VӐ 488ug0$ Zfhâcm: Ӎ644Z,`u[/0_gbr1lv0l#^ZLy(lX{ֳk19E4T/>صa'1F!'έgbr>;? Fӯl=6\~/@ފ>v:? ECzֳr x=ldvCoXi\DTOהk0xMpȠ`0)j$Prl ?Pּ@Sq@xk@5o Tm6|KfCImwQy[ 9B.B{7o#y}v2׿_vcۜ`_Ӎ,Hxx1Kp |8g~/S3_vmae[5>bg>+Әe9V$,j|$왠 3JD fY3+oY#iU[A5$=g+8U{嗽O &d}4Pk5֚eWZQkM!f=rG0z@~{GHߒ+2,[.AcE߈*aI:` p}b.[oFW55%Pչ4@0|4D> ށTg(!s\+ m4y@Ct>UEb"hf$w[* 2#H)3p+98,tE4ro>FFWZ#*uJ)Y=-DFzN c(> ^p2q8cz_@C@?"079'Kie)\.#3⣒Jp' \(wNDޔay9l_$GSX\Fa9yC ZKUG 6S |0U*H | q𾪜bU@%2K -S+vln@̀d"(aK]  K-XM,Aў3ڂب/(DT,V3pN؆m(;R6SIi &#e50`WPӜMpi¸)ʅޕ۳H"jLfK;2 ʈƿ3rۏv|[q9/kD&5!{^5{qK{3mPhLG{ף0=n:x{.Ѿp*UȔB {xW>wZS=Eޒ cyg;8<)Y m'/q~{XQli_k(F>đ|=+&"宓 ܴzr]עQ$/G zDea}2YN]Q|RfuZ{i5r[vBp%kǡ*8LЏ_cw죴oTWKTt>$HNqg"VbGƠ!D͸AsBH$xЀ (1] 񜴁@wxz;sOk[ptag!L}m 2͔.g53)M,[,"0.H9#BG|XB^jZgMAI&5ohoyq;[vp뺣Ly]GPM:3h2iV%Jyc4aLeJ $2=BFd)츤ZE5FrYOU=3PG%$5(d41AOZ5joŕh93R`}"(zc<%B.b8pYo,&)4=B22r]g;)ƌhn Z恭"& - l TkQg ~J׈Wg E&0)-TN BCi!j@nHNGGHԎ Rk=[gm 6@C6  ACͭڑ׵ 9HЁSvn=o{G_yQjGv@~/ggZ \6]|FP?Ҋ#|d}{9V^h8'\dqxZkLW_?eY ()D!0A`葡N@K@szjf 42jd?..f wxQ.kV#)(g{;qnaء=&.1#GW[ -ĵn[dÕ_Y^)3DWeDY2"({vf'-4[HtKB {hs=|hj)F 5)dW4{GXdx7L פLO)qmpfBH0=[击J<܍%Ѳh#P⽁"pw!=F5A@ A6T #e +דEXxQC.ۑi|#kp[fy5I)^9)EWjR&pc@HPu1jdxЌC k}F%q9gr%H$=9? G?]-q42jٵo0t*IMOX+wlpkO]tkv1dȈ?vRuЉ+r?=Ghbh`΅ sD \GԆ)xoNi#5ٻ޶mWS2%@(}R4>0HVkKDV%٦d3X,Ʊiҷf֬jB.Z?~ dڬSctˎ-eC\jsjj Np'[P6t 9[bpK ՗uժy5XHI[_i6O LRuX>[ܤE~Z`zl2+cPԏUvm׫o4]Ur-ertسr '<76f֌LysVg߯AݳJ%V&<ߢTp#馱}nM1p:Mר#:t(MJ]CKU!'ESCLMPB,bף:3wQ${*xt׸` ܂sZ$+JSbt6mE)"jҽ|N"WȽQfMgPsٺ88]JZ.J.\xS8ӕ|5<_pԹW Fi|:i}딚Yaxs6%R cE?(. ɛ}9 &ITRfCp̋n@iv-/O ]d$"Lhy)C%Gn@\ϾhdbG` ʈʢRub*^]%Q],t42J '8Hfq0Kddۻu7,}Ge:zu h"_'C^;\N0-s7[IU1pAdap=k~War8^{B PQ7dA9 7:"R{m̞ct'đ|_y~z2 #NGC9T+q9_Lg(cΤds!GActO6lK[EZ~L?7s0ՍC@f4 MHXz`q؜wӾBMuERxkJc=t:3UTg/ oBFJI; @;@/u#9! 
!6;cÝ Zb]u>' x2#};p'q,ص;*a$SCmOl񛧹@v7ivҒ?&:pZn&V/>}F>Ig>+=?ȕhSt>jO/T?}UI)[F "H{0-f<>:GaÅpν 1D>T !)#<_ M';f(L>VCl#.'vSV[8G1GiH!j-)ɘD팪fwy?5AQUR6Okx hzP ]Ηyt/5GY a)h*A<8QD4UXOLZ8<,ƻk 3~;;b2HX=x58xDM52Ӆ?֯,#L )+s0+9dؐ]sĐG$k'.<Çn]} C[b{geߡN]r:"k lw'][|ŚhOŠIHi<{^oM|LA*n@T\^>lOK)ȟTtۖJU׭ו7WqjS"*0V݀jP6"asu>,-p/YWd\2TͮƝ(NhIIiVKLϲTS2njIŐViB9#ӊ.Gp#XaŚ 6>)BHpb4 ^a<&O{%,/ W4[ rPՖcD~z9/]ҁtgGmc ñ7,OlV1DuK{e빇a ?'t?OZ>,@hf3Q99wQw -k$qw,ZwYmT\‚)M{0"yӜgo) \7?:*<C["F 9vrӶTS VO3Ҳz, ?\GVtPexT BN[>{6W 2(-P)GS8mƈAu@Hs.CȠow?<J֗T2Uay1QF{Oj~ 8."$U!COqޢ 6D_4zjSl*$f_;RK{1WqT6iC`Lp ^C̠CD8X̘5!hA$.{vNj] cq0C4b'_bdӄWf%i{={Q́VIMj.t{3qqLYL-{Y }4L9kq; "K=<1{۲SH]*g-^ maW5__H]hڒ`lqd:jOLmiZ߽z4Hc'؋4 a~Bqix(Xox-un<0l1AI8Ga0F'_";9?A1߉c |I9NL+`U%ИD"\i*2՛Z-N> `114 ]\zމ呿|f~Q|evZl E+pWT2os (xt=lЦHkճ F(81F&xu//5"ߪ^~~w.\p%:՚cXx_ؕtXB Nrq$'s),.7oUm8M%n3LRI<Kf9J(I#p8Rx,r"' W?& =tf <;u1iЫɇ65*EyQr^ԨkTʎ7D-ܩ)許2$IepMVIPQb`{RK=*bΎ 5r5]- ҏubrODqqy@7 ѷypbՆp*VbCOX _iQZq~xj,3/]rÅ'W0P5_9OVDѠ͈}vtS.o(9=ݤܠqΌw3vz `; ݚ>ao걳МGiX!AywUX5y7+o5F`]Fo!H tpLBn!7/?Py z76]3l?[7;x=݋O.Udru{1=<'85n%bǩ&V|Xʞix%6?nKu;`#О1i[idc3mg.'0G=f̍y7y "xm;1vz!}^"UPi]7껱-{MlnVE[i_0_4˞pQ|APG}Pp#:F3W# KFIO3r~wLc *h_7k+}f|p7Ė㴪(ˬ@'q3 %( ,g3syI-<̓) C<G1򘹜kmC\R>72g!O}21B\wF/9 Nʘ~Nî&'q#`vD Fyba{)˙qL6E}!hNVc"T3 @.`=ZH#c{a,Ñ{3aF~atĊ>7`/̠XuskRZEb\ip cfT3b%0q{Mkȯ=$8P'j"LGT*5"> qSl2 C#CMB Z AO-1Pr8ͩt%(^c/PE:m "hF1a"9ǡ/ 2VJ ZaZ "v11x4\iZ 9 t&r4KTmv G 2:FBq\MACl t郟X%sFr!$J$Zrq,KHXG kWo.ؒj[Aޕ#X@ͥQGa Uڸ'bOӉ] s[%RD2"P},9AQ(B.YLy,F )Aĉxt%QH? #f(&iuV1%Q &Qq$;79gxn;0?UYkަs[/ayvOJ6]0L8tSm\z[,p)g9[pVeۀmR-wK x_^kX|JzsSDhMMrɪ~XN>ZϦ|_ȩ^AڎjҳvXtMݭU^7``h<%ͻoPNZD k?Unf~ة<.,~L~NO|׳ ɨc/hGVYG \6`4y5G]$PtE06>L+ӻk&2mB5cPP'RZfTw.É5S1Ӂ R\I>J$ߐ?! bsAl6qU13c̦ThY!Kf۾І}Gգ(1 } WX.;%>㒳f+00Tx c/(b&HPEH(Cscokl0P(_.U%ߊ⤖; [QDZ;z{!13wlW %fϳ&S|w&珧]6>~&,1q bBx}\EY$߽@L "s"]ܓ 3xY~u9),V+.x,#LD/n;a&I%(Ir~McTԆ=[\ʲ$6:=4pp.1ܤ&ȶWQKxXWR[Bn@[]7f(u$57Az 3]VU9 bA"0FHKJqX8&A,f`*88J2ygxL8+L!EO|I-0O|90GbO4GN-^16IH~F ҝϮFz|rmE2aJkr sqL_$ TDӁ-B  WRs Pde>ODf&Ȭ խcRFr6Q{0. 
KRRyV9[!|TL/asq&MrGiώuߎ1ٞ$/Ajf}-v7q%HDS-[RH;ٿo}Es @hi8=$nP>8A>>k߂Q*?voz25}JT:`EgPFv|ᎏC텇[_*}K^ׯO?zqqygjWӏA 㤎.'{..tӿM1̗F׍FFϊo߸;\xIēr_򟓷[=Ey_wqRkUu ?[OwwAR*7@5R&\/\s E^ }glwm5YrWO7w?q?F:nLӫ/$TeXj)3:#G^7bއ۵6Zɓ@1蔩Z~tekm}+Uj?]x8;~h6 4?=F0^|<~o]o/6 c׍]C4ݺj_ @~zIi6|y/#ȓ^twAah'F+jŁ7Fށrq#9޶kRjB뿲O0P%CN2O)JK5b_0kύm^1#`PHܘ ^ɒMV.yl'+V3>xݰI4?yLPZ" cljuLІa|63)sZaItSr,S"#>>7:M|vQ>%뇣qLd1F`}4+tdGsd8as _8@ۏ^;+?N?|> Vu0G%4GZڮX>8QlYqKBF4,Fܵҷ_Vx%,_#DS,\M nZsШ+IX8]ɃD!aHSAО󂄌)c!gFD"D}?7{ga*cwM3,uoUaqrSW+7G+6L8BpɘXOjI-0+r|M7G-'$VV@ɞvkdd E ","$e/ZŒpr #Òc%`Gpr|")N EД+ jxx~W*IlA2b [yeׅӿ, CUq=BzuMue?օd jYչ \_20K+SuU+enx鯯?=?,Q7=^^/wǫΖWOWK󫁶څvu{yzo'+ʻ ;%FtEy,8YD|&Sx{:"d ?s(b0Ԙ tLID$ɶ<%.%=xܥSpi{ B9?q< 8ds -' ;D3@Ϗ((?ǽJ'zLj,f$H!PK+v/=FK[Az 4*=(ӘJ):59t,6y9&T[2sgs0ԑs+se%ȹ"Aڋkl )m vpk-3r nkj"nm m!mdԃ;`Px eKV)vkFxPiyAeg=Ókws`)*w1aK*t[O*Zq9fqRl4Z2SP>E#J}*G+hTڝ ƜЄ )D*:$>LB` L|eRU,Q=ʸ,ty  )7dc6Ǟ?p(OQNK)&L N+q5Nb $JTF]~ ~ߎ[CD,:JOY33#a72 ٥C0S1骱TPz=lW)luaPp0Y /Ջۢs3z[6z[1lfWZ|Ҳ7칛Wf2g떅 8'.%^Pt}fGK/.>g.\m`YHR[Q'64/lk?kSiw;"]6AQ٣mٳڙ፬J̎RY'6ӻ Xx9G;F} cx77&/dI=Bf󓄫w=f4,uZ'\Y,xezz`:|n\nv׭ݭ 2҂vo.T=JvБ;?3Ԏ9F8'm%!͠e_N|t|&s[7|)oOɿ/[yOӄa4A~ݢ5zsos_^__0iv9,Uo߿QԫQ-AK8}w`]^݌jtͧ1jIE(4Ϳm9{m=Hk_ҝ^QqXDaf6,2 #QYS(G{h#\l!u]8Fػ- D3aT cNGQ&bxPf "^( kGۃb%å:\Pe#5TTj୬EֵhI6Ue i;)bf//860pc9- oyCg=~9-twO* [HK'[A"FݯeJ51:D?_]v[w=*t~~'{_QZextH_ns\j>T PUY3Lueid%Xp\]^yBp{ĸ65TS1-Yjg鹟W31HMs]NF.2фִ{\Oa_a%i7K oYJV答.[)Dj-Js>#sۥCl8\yR>B#;̇].{, U\L!h=,rHfn2Vj=4c9M3@{"m@N$PY&(#`Mb5 MO<0hI+; 4po`g% :a֧Td(R ST`Fڌ 72vmpIb Pe .&i/Xj"+XNkYS=RˑZku!#Vo;ۤOOKjÙҜ0i[ϭˋ[A$I+F[ˤx㍲IS5֒FTIEFrZ;fopԊBjJ=][#% ՚fvӣb ` hÁlFO>,qF"LNjam%7 s%W¢25Bq~'ۑal 7L/AZKv&bEF=/ f3 zQp J lA9@bqΏXCIo#%vr&[׍(k^nbf6?Vsɲ!9Awb7em<及fX`&55U "Y$((dkfRQ-D=<*ځ踵ΧG&pSYE|Az/f.=@:0DD3fr5&(ը"Q;j:¤&JS3t) wyٕ=Zn3{_4ջ$:|8X*cmxTD3OU:h BQ -*5! t<УZQzYekCA.nr3 ~tZGe|m L`d#j'nR<":{H0Jлw}:R~g3zw]E:l ڶ%5CUY!APK$,!j&A3vVM0)C)PVV 9Mڨmpud71&'3XxvOo\ _둚%p!0\Qu~v]XP?M.AGK".@AE?|R)%N{B깻| }m sg}<9.cA}DEeeL.!yW=D2y{-ԍmf$*ͯgX.wiy9VsNI9[lT,S$/fW.YYK!|4?I$r~+џjߦoq)V-ۊ[ .8+g6FxjK٫t4CAp_š2MIϿyU2 H ogwCCO$dm4#E1?۴j}!4%M}B9C%"Z_*.r^W/OeMm崲58+Ce] F6)\6`kDB` ]"#`Ԉt&G#9ANzL^$dOހchi,IGN ixLNc@)>;ׁyߢzu%qßK^Lf?7T vI ~aإ*S씈B?{@VN[6D?rqcŏay6|gI؁7zn~6Ybp[[kggrfi2L ~F) qE ѭ cfA;gI[Z:#Pi6Aǩ"8i6F hi\E:5X52Q^6z <8!S(aP7i]} W׷,Wd;鞔h9S)j"E1a' B`ZKe(M*4@bՌqFN6rUK" WGPB]#.A2pƨ *3.LlGGy!uW,O\.PfrB&R9KgwnELQ_qS O/_We|\DԸ)Ahj|[Et8IlĶLG)c)X1UHςk 'CBh$-Lj LE`;Xu ***f=|2⭏"\wr)%8%۸ʹeThP Z28a E|Ł 626+))vj¬U 0@Xf;9p<%/!5bzI|;%-9@5Җ቙$[N.Dڤbh\.̈xߍ tB Y@v iݟ ~f(SCp-/QAd scY!Dm%Qp Aګ]nj}WZJ0qr}azl[HMykR\u=3z HA|GrJcn%$et/ZAvkYǁljwZ}m4`>^4mxEJJrZPP D4EzB݊<<36r[֢2f-zV%{ƾOS,Eސdr3&O73I9|(=zYP9I7_d&myy*EZ/dsK㌲Rj}x3Rێ'S&53DDU$푪tV7,rċf+O ! K=pN`YEGVbFv ɣɂڧ -V ,>}B\k&"'ąŜ睨6D4jJ KwmM ]o2T߇a#6bB_#D*<8AVϐ҈P3b`D9yjB!F%#6Ok.S%rͿ~Ͷ/lݖ;?-W&AVOkb21@C_^F6 g | ^W@MnV0!&byxs{㐟V*uyN3cFYT})V :' )e.C<z[' E'30_s'?{,}TMT􋱱ϋ΋-p& 2ACj5h7q:Qtk ]*`Z"x)&wM68*ɡe݌Hn¶.~mBqth/.[Rjz]W^#DbO͔8ėghBƆKmv;ͺ3HwΐP{x!1<ؑ'tZּ̃|45J=DɈ5*,Xߓ`DOz2uQi;(<0 i5Wqt\qGUytL[#8Hp F[x /#P*(BWV(q%BM~!sۈYTMvtzQKO/ 83߄Rc|N?dE],沅 :$x#Y$FFe'^xddNfJ*JEak\Z1bD/:0:^\=)#e8#OdëNĚ6Q'W<]tX=jl2*% !cP))2BjQ,!uGɹrDτ:d+>b@ۍ$Kmf x'9#N4Ry `8H J`hlrk1ݷJEX=1>Iocu}"q4]7:ϱQk񣏽;E&"E8-5?QFɧ;.EE>i'AyMdkJ(^Rf(5G49bka,vIszmq"Z|l)%e`uj2MR S|a>pS-E0<'xf!ULМ ,bР͔)p@PӠvs}k.B 3}&$34 ҡx# eD;G4v2r o5}ejrjɆ+2eHm:6eiVݶGv*:c]6w_%u KL[|ۧFVB{i55 Y uِnVXX 1H!<@kyQ fPy7a ;)~M~X%F7y;YJǤhmNrK7ThMeeB#Y.K.c%K9Kd1 Q(<:8\o:7{KPmy\ڞn d|T r*7evn wo"hƉz{8oK ` l '|0/fvQ|bRhjΉ8jգ/n9.L읹? 
!8dO=U"eJT[/SⳠy>:[!YVieJK 3@Qbwd |6LGaN?zj(5 S2kl&k/#hh0_=?IJǔ4e1wMX̨Њ>(cRAߊEZ-Nyrq]{dX ȃD;zYU{uZ8YfޜV4~=54'؀?kXi.oxU9LHn 9iE6h1:C̠Vj}rFyCY8 NFD`ϗQ˻-ŒP:.1@I;W\[RF-J>r04╼SkQJU8i*hzHټpL0+k*V bɎ:Yw@G*zbCMGD˦Qq!fqUb鞉``]/-dDS}06Kؾ !}86Gv|+qkeNw-^6ywjoEXu7i8C\DK %B5E$O5{y&\ cjfrsYc(8(󱽦`#{2[HssI9~%O4ASrH[.%*urim{n%SU)V*~Ejo3O3B2$B1e8' 4;%A8Y$p@Zi9;oB ʛ]qw~~9ُ*Ov5X:N3|`t' Ζ˻Y 4Gf_7;FyaƏ> 1wHNE@_h<%B|Oy[34]X_'><\&[$#̰bN.~.FT1;Ez34J0C`Y %ZeI6֢K/Hq6sH OiYƝ @&'[#y[G(}jb!@`*SG|lk#!nzC:_+\ϭ%v_Ż_킢:H<`ֵ19Di <*$m& W8RCq:kŸk'eHS@)ɼ0}/$C#uLV@13d)Ht6+Mms|H@sX?I(ia6{-L'X.L(^>(=L'" |Oan6!J0.ib.0|H]*ϒaLybLTTߏ( p"͙D*<ฤQA\k=0[s`s{q8p7_1zi&!V9eB+50%H0eCoP(tSDhIMᆰ$Ғt4WToLFV<1䆆8 nf يK~M~.wMN{WqǤhmN:l1āJwQby,e%=wЬu$/,CB"պk"_IwNr1J* &"؇\q9l:$߮doɣ;gٻFn$W9.k!&$rt챰H'b-YjI-޲l2]TXŮ@ԇú|mnޱѱ"ʿ^%>.'cN'dwc~\6 ?ײėQM{ު.[EA(z4*8>8-" ڏqo{ G c9WBWP|qDɎט~)Pnš+>vӗP_/3^C"줪g 4^ _;']Z尓蔱kM v.>٬ٹv0,{}OalR7yH%>,ǯ43dJ&L!}AP=AMmt;l;EJaǰ8ao ǎaʕkC 9 V۟j$եv`?pEWî~ZH97Ća6e3=k-(D~ `}ޢ ph]\2LCEܞ[}t/?9ɧ?c쉇&a];鵮wƮm?ӿ_ՅY$_oi=z[lTԐֲjlޭ.!Se[)tz&,MMIyjNj<3Zzo)wsD|-_Ǭh/~6ա9_8XGlůטюOXGe ̿P◀P5rMɞʹ@LfϗhKp ʯdNocJ6cEq0Yl4ا# /Kpm@ Ch_kc*HÀ_0𗰾 J4nj5/ǟ8!-$SDf)ȝ9՟K tx1!D^B.[a{⑖C"G "ẗ; Ss$AGŐ΂ 8q*B e3rIYx{ r@y.։-p)+i iкQO4h8iX0Hԥ6e셒I D;)uhd}Ҭ&`;W`),CANӠ)9Ra%. H)1O5%0ʝ7i*8ǒ6j ҪzB4qO:H5_)c Z@qYK4h A* x4hY?6%Fbu<\*\NT?tI!^oÜ!,(fJcS11UZBNdL:Oԗk%Gx2٥e+*,QjHDRcQT$1B9 Ab,(X!!hTG"DZAn 8}x?5ՂpscgnQXbmv08fxl!8'@82cj s 8UVń8WĝN( &փO2z4ey*bQbN-2BAS)*\rn qu#BioMIK-v&i[-L 4ZpCY(jF%,uf9pb"l伐=6 BpCƅI@Z0*HUR  QșLM ȡ9ʟ?^bYvia=էGij5D %ni&ֿ@>|%l.>\W!B8Or~gJ?>ߏ'_noq!.}<_|B3]d>,kBrl߰ YS(}q,@+ñM9̆\sRp^l^f)*r;Zw0\pY89u[pW]K1*2oc8Bې(kpkt / ً&+󀜒NfIF;@5"L1<<W!ȝc}|ox)]VLZCA]x!cE{DvX9ۋw"gLaH^| KkAb*hw O_|CR?Tw"~֧r)|l5Q9u5Q9Ç&*kU.<Ď7rAUc\MH&ҢUyw| շpӝ+XJi0ʆ.Krt&9Nh(P9$ 5ˆѝZb(B㇥J]*MOi/8#[/ON_FٿoVW\v=t6s:DT"5IZ[e:kqZ11lMTA7=`jiˣVICP`sLhCFa%HS\(kԙ٘N(V~mBBVR` YČ˔p Pgirάاly)XYmS3"])1eO1MJ  ºb+ iy,XlO&S:AnW12BlQ Ĩ)Arj#]`%#%k:.'K5߸tkv=CqjWrW&H~ =d% Qf(tx3Iv^d%,TJ}T"$'C<(8Pk-rx~*[0Q8GN\z.0!Za XIytҫi)_HB>JwxPC`ĭBb͘g٬;m*)r0ϫBe&x[q0LHDtxzbjdM=3bY/.T#wqAk'm(A&2IБJI6N:wޭhF8AYIb} r;xIT S 4vIA}jl?fvHT|Rk>DmY[o(p@bn#^KqK=5ѫK(ǪH?Ӈ)&" QLb;W/us5?  |.ߞpmBW- P]iAX#ݟFt8).O *lw8It-}RswOsvP8pdK|̳y L0]JKH)3QCJX B3^.+6?҅xEPԝ"s|),)iX!?up4 78gi۾qx/Px_/4i=MN=I%?9tu>Y+娥XK"]~7ׇp-88?+<7|֥8B?mCiKY5`AI8o=Oq9gO:CQ mjp|)X>j6ky|s:I̦sF/ D\#Dkx :9rQYzщ{|ͻR;wE_\gh ϕBk̈́=TՆ.i!1B[9D@hcKS /O8K ( %af 4!M"fOS#JSIRcWl!|PȻJ@Too4.  #- qI~aۇ^^r0w+Vv2RV0)Umio. VbICHjU%X͑toTBTZꐢԞ} (d*3lT-ƪ ek§ ]lQ" h5Id"HD%9dSHʼnDK-c]:7;\;2,T/7L;ͮR Ir[)2Ij(pOGHj@s[ D#bD ]b Y^]j Q1x?JMUپBo3~eb9x|;^AƎnG.X㌝kw=}7K295vO1tnBo޳kd?{ ;q3Gn/~0g)2繤Efgx@˜dai=KS[ȩrIѐֲ)ӕ[] BL=x>Y̻~PքV)ډ*|䡭ؐ`{Hȗs1a˨ؘ B>A-Q'6=oӦi86͠Ȼ'xWm1h)k mAﶭTTLu?^V!W:Pu mG)*.ig eÿ"K)lډhV ^`3jí%7%pl/P'8 -Z?ҟ_]+DU:Vl^+ U#R __sTJȖŞ^3 1|zU.KsRNBl:-,1`".a M%R0pfɽ祌0_e?`!D\YۗW 5m2)<ǔSןcA5xsNkn'wZzbiof*ɝQ;?G1:]&s:7ǰN+ebxunmK4lRA7d>L*v-+lTzo$L_;*F8=\X+̹LG52AuyϊO*\hZ,R0PRB4-Ѩ-v0>4=sAa4; C3cqC\'P!E<~d *ΛR]&}$g˓gzw.z0oeVh}V:*ٴA{DT ~>G,:@II>l&@|x lWn|n L g(me.+f򗦻.)ո"e&E25b ()%@??à,k.A^G]g's!qLg3< AҽYjFp8YN22:SbWt. wpΆ6h~`їR$g͛H.{mf%鶣!+ (M iEM`D2GkژYb6|efV~)F6ǠTpyȈUV")+ԛX{3VVxܛ:B֔K6??*&iDzuAld!7>icC<k |Ӊ+ƣ䛜Ikq[zqcG9"V˯te#<6 W??k=K[k~K>WX(Z }y2>WDTč'gJ!>Ԗs@(& O>;dX`adOގ5 Yڸ~%TkHluql)[C+)Uq'[zF@ˉ7;@pӊ'"l ׬ƱAecW`φ}ص" scYp1RK,"ji=(͆L ?'7-C 1ȹdkI2JGS3y$ZyL6ow }y7?A-3gYt؄IFmv;Ca=ҌĚJ! 
_n4}hVV ضP ;u<3,ED.p`5}jhk&4fwn쿽[_:a'ꀲbz#ׅNFPoԏ9jˎC(Ӝ <00qEifZF:p&0\D2?;7ʚ0Naل ]0@ExۚۉG4a8ۓѠN*W:ynia,4yuW8nI%0;.GӻxO%ӌ7܅7Ûɟk/{//~OBz*h y\`yOe[ٖAGNvpx䯯m/BZ~O.ᖄ[KkZNw?!xFc 䄼Lgid$yqEg̥[o^ cPiZWx?yp`>8 4{q}Vebjv{a̍ש2aGou.;g@dnKxˮ\znN'7n|;hb@&;/5Π׭p  = -}`q=:-?E  @Ntڸsf ,347giJkM*wY{5 6^Ml\)KF|\Or`ᢉ(cPcF} zᅢ>pzI%ō_N7n~~?nP:}}ЂVߜuL3-x4O`iVl S\ [0,8[P^vz (o0גp:S B9{m.7~aI3tWg^HUq{vum[D :PT:4 7n^ .HYEڧW խ?_{ <t  ?z鍿<5駜{/OZ?coi?ji?^C5_5u[M{ϑ. hu8ȆM:`r(vq&hۿ@9o8$ do|iyrwGШ/lz1^wF qϺ?N$j5{bwKdDK# ol : \{ZCFOJK>db$f㎸3yZhj('tX0Zs1p#9#~ktGZJhY>oğ;\fބǽDfzqpkg^d=ǽb!!rE+~fB, #}Y>f$:ʐHX鄙qkow۝Ӹ{cSDGY6M1i2Fu3(lFL-EÛV Btc;4CKd)b[12c€ lqHD 0hlhVayNQe&#Dy Ы nM6j,#0c3@IjB85%+CQV c:D6TR’ʇㆡKl%!h%*uY#@""GKQqsA 23Bj0{KՌ`VKY{s$-j$p2zb2"'ޗ;]g;l~ܯg&5@7#B" #5( :Ke8Z% (y`cƊjH1i'U[{+u~QV,~ctįqc K7`1 L%YЀJϥ[sº&|ˉc3[sKy]LU|9f1P5 {󠺴yae}PU y]wa'Af^}nrn7.M؈Lގ4/n%`݅ Q{釃;tZz6_I\fNɨ*t~b"!uhg)8BLĽVm\ڿKgT+ْvyҧoY9ҧ$g;|=,rـZ*q6"b ~eyye<Ϡ-µ -}42|YϯV'l+V' jaZQR9ـ68(/%>O3v$</5o?L 7E=?VoSdP>gܞΛl=@uKړB9,.l:Z)ZRIX28Q: Q E8-0&B40\r[g9*lKK @v_^ ~/"1AgRw8\[X%!<a$a"$ uA|8U /!1`w%%"X Q1e\PjEd9AbK5tY&ih H0)"M4&dH+U{` *ު("6l&&zuE1ZXO$*@1£P3gqc c͌a!#eP8 iJIJQ=u]@ Wi x b@ >Xj IB P"`i%ϯ% )v޸`Ec~^h!7ڱ6L܆6 LFAd܋qT*9$ 5'wA3(uJ8k L02 `wّ3o1U)ʏ^PIZs UO(`6{XR0C0[P:kH0 3{DU D0Etg*Fp"^+8=è@Tg* UHnkΕ9pJ+E'"ŲyJ-NBqZ! ̅z>ylj]v:~?"+0h3zJWQI:jⲏ>>q1"P3&QALb{-ὯRFACm58Ԥ"-z4]o8um fKM+q =dѓ# P\l2~"+Ç@ X6Y')-(ř%%/x<,S L&&ѰS0ȫflY=+P4#t`ޞYA9|[3xq<'[b> Ok=Ozz.(IM_orQ_h0 Gg.g4~ /%, / iq&wfs2CL>W RAWյ䅰Ҹgި¦#J4OCx,)9fcEu `$ual^y]OxDlվ+ksHfip0SVUxi'ۑOAhNƃd8J,$R-ets(rщ#.)OԱoU@Q'n;0@qs|(؉Gs?nPBdpg@LcVb[5Q>@KQЏ'hkbN!''ۂ U0@%c$8q^骁0{ָjucSTŠ?\gei `G=0 R198 )ƊZM6k3DeF)sѬ5j. 8ЮiRjJr7tI(׎ L9iMm| 7kpՄ7&Ҕ)g9F6HS! _}% 6SsC+ dSI~<1tʄ'=0Kݟ37Ȓr"n-w*dʥ}:y2U]JȢ˴;kQDe^f-/-9whI2`(R](ޓw&$~Δ TnY[UNхRʑĵPz8})kĔJ Q tzZ]=JY2O.Z PJÁ;tB{Ӯ079 GpBP T )^/z>OBObV?~NFI|7yOCD~gO /v"5#G4\lYWLϿM7S)[VDo!hZЏ 9i(1z]]d|uX 34 "ΓΥ o@&;qP3+ ":Ap$-nzS eͬ^TxkU+xaz{157ۑ?н4&a}J!^'3ѡsxid+cm <؟ 4i/kZUWoxߟ}8x |6_3(aE}pm_G%a98._'E}?@K9DgOo`jc|rp3{ @$/3}~97#Ƀ;lM6@]s̱ʹeFl:7I!3 g/s?)y7OdC rϿMoL0ƃ/fՠ-Awa.^7[=4*yx2(RJ @H ىjb0&=( ?ų4.ؓ%GZ.Ɋ_*y>_26=]{gnU7@Kxh5DwK"m;{i˻w_YŇv#;Ա=[$_3Bd-1 ގOPmśח r fs<3Cc+$JyI~DE@Sdþ$me;d_*y+?K2YNm+MTظyǁJ]<~a#3\32Fk-BDoWxvbwghA2+zD'9SFGd)m&4 CߣSeGTR U7O6_ȷa0$J }i|ΨG*D59i$ޡS$W AZړ!WE?6lmK[z⻹rAN r;/GM%ƛհҀġoWvF@9n;Ⱥ͠RJ땺_ѬH/L*-nͥ*)8JK/2.28vMQUz)bj^.okˆiW> G> l6iSxVoSZMeu/5O;ypo0A e'!t@q_b#NJ!ߋڨc9%#X(+Q7} > k؄~P{BB  \xFZ zP>XUn(V̽IVJi+?)Pnhl o'zVJFK-5]_Kc}m|hF bPF, LD iy(|WȄyD%|~-V7Z|oo 5N!ָqظ) _qCa`CJPJ1JR=#TLwD..[i72TSEf~J4<(hK>-8{n@ҝp9kCP,Qf޾||JD*9ylY@+z ۟?w3yrHtdUz 4-(k/;]EB7Ћg?/;JonӶd+xrv?1`?|f+J8W)J"K i.1[\f|`)j H:qX$H f`F OG)gaAEDH@im4J~`@GZEL+EV2K(((ck!'Bj6X1IDG@0Da)Pp i<jK@_$ Je&SLyT oj7|Cn2԰/{Efrgo ְ̺EotwFoty+cV|v+ֈivoaVfaPgpN@ 7Jm!?w'u{Om,Wܣ;2\**Efm5lz','nvcƢ}պfa`TxպCU;V㖤-JֺDZ7} n@QFV՜@- 4̦iˆM״Um55*-8'*(wr$msZqeoxqBfۇVeN Ojʝt;EO0vhh2M+1{;](pZ};Z{ʏojEW1+?Q'S'$dV#$\nUqiо *f*-@j@EFCU}ðf>AT\eR x.hZн8JLȆGcT`)u }eӖ6LaW]io#+$3R߇!v k'_b3=+yǮF[MR∇98^9:X…e$!qb3Dc H}n]^ CwԊP u{6T Y !N./,0rYY*6i8ӏČ>$ 'ao3P&'*VZZ{nnDֳ67`kN{EF|veS뱎n!ư0&?;X J]1G5= uн|uz SYFE>,a |4= n"-]+NˏLiQ>xE_0rJ󠵴 Yx}^7g|a[^%^g}<,/@%VLeXxJLXK$]rCNS(ɴ1=g"Ǧ`hg7ll>]NUЮ7}%ź!ʼo; cf-7/v)^K,c#[o:;OgYe/0Z]zA\m1ǣdգUJ|ٳn%%͐ nLޒn:PP d#t;> '݊cJ2fȅOѣFXAߙ o7饋VQly.'N'~Jhe.Y9RDc6DXSY3Y]=EvL5F U# )EA wf5qe}lҦV:#rlߑc\5S(Zh,(aD4~CYyXD_y+/uXEيB^+?@W?݋]IeU6V*Aq`Ax~Nl%GU]2n1;2' rU>d0) ޶0t8[7 1BȒ!n=:{D%ox`"t܏ޑ`' -BFDٌK\-ڿߧERj+DX,B™l&-SגbѤ3´UY.CE{x!Hư۷k#7kUʒA<)TW x\ pyUXUIC+.(`2E]7={YRʓࢆIR*ê*̅.nTXκQ ϝFkNX~~S&K!TICv:VI*D4|RH0(!;?geYqE(o:O 煟sq<ݜ2c.en()#$ <aH;DogHYJb+̽ e{v'I"@ lH# Fϓ~D42nR.3}\!A~_^r#g=]^O9gg"isW9ppj6]o?[HSgx0d+Jl@yFRō0v{"@`[7ldXUB{h4人{73%XQ`h:! 
͗7dMtk#3Fr4ڬA]BllMK8)kpc/x*s!<?3Za|8=h X?~)'W11Cû /` F>~줄WY۟ řtk>NaQȯn J c/?=j]#?,յddd]S7UɆXZIhɔD7)~IbbyWAHd kUĤn+"~pS@b\z>Gx?,7] >(ӇdOu<?oC6swt}gfƝ,=&y z| ( Ι.ד[u7y5(\( 523(d pׄU ;TV:bȉ2AL•9[ \"]uuh)2i6U ϝ7^iWV7ՇI'hޟi^K魏ZI?F_>^h0R y֬-B9RKP(OFw$SIdf1Gq1ư: 8'Y rkgz8v}!y[*'A3f"MijQEH|gyVɏ;Πd GQFqFPپ/ 7LUF|>hMq2e_|F T^\b-bźA9"~), Fo.Uݙ;aw?SV7v`'8j cqVi#5)7 `(xr%׫_z5v3BYIxkFK9O-2( .oZգv+(EDcVs<oJn,E"Hb)"ku"-R&lY)2 e{>i\Si5Px831h},1MiS}9A9[vp҈em)t+ۭs60o(]Ih I,OZծr<֥(h;e2m1ER%vʓִ3i/@oGtO3-e3"Ȍڧu츓igS!+.:!KX I=pr4̰dW.cf6gt.o﾿ Ȏ.S8NuLpw8N=_E Z$N#İ95Sa0=c< zAv j()v~G)FRs[~:EыFS#RñAi[I#$)ISEk0(dWLJ*pN##QS[.%qzqG܀=A7Ǘ~Nja\FDQi'^L6Ǚ5UgW~(5_z, ??Ua"7-vb4f1;Koƒ=eOԆ+{ǔ{Sp+ ڪ ĥ Jl Hmj31wXYu4@qSJiZ!SZ Ur3)*Os9=fBbh3Ee6Wl9~U5cdn:D"c@ Jشҙ.^oW$r 3%@&9+B&ZHh©?A#L2E p:v(K ~+S- ) E`HtN2SK֤1[~R00$S\"1 t$n3 :Pngmw7|t P(n9,B0&2?6I htz?Bjli ܓ[RN b7.VFBi ?|| @-׶cKtw.o$Dx|G>^O/x_7"o!J-:? d:[ WqA}OgC3L?̇C(+~3ĺ ?{Ƒ /7vadcv} J\ӤBRr栗Ҍ#c fwWUU;[Œ>*!QM&!T.{oϺ*!Uf]s@jhT,&UQ8)Q6MLFJ7u,g$|Mj7MXVf_d%(TޚWpxفDkBCX(`sK%&%+*Ut(V >(u48\h$ Sf==)`YWZtBDyLA'ér1o-` D}5(0c |oz)MEv|%#-(@%za b'xI tPI)HpFe.NThлF4kĈ# HDG(J"^S&5PD@ {7μMy h\W/mCF7⚆8{xKUpwc>4 n}I\dGꭃJ7l}VO!cIE V"yA;LqNQh3՚kߔNJ|?z[`#l$fffDn:^!baȳgze1}LGےnmqi edWJV.s($'I!y4Y+xO`)!0k ŲSȴc .K s_J%!wo  -=cI`f@͏ ո{8ķ!((ef餪Jeܪ+$0 ?&)ySG{Q)p#2%V^3 ԃX}X7> СB)E~ۊL. qLЌ?RT5ecq=ĎP|z~rV#nc7ء6;p\-. Qj늈|ې3nڂ|oOWO贷+؃ʰW1eHs.7m?sv>Q^`xdo9W@7k/M (IPſl+;{i4p!xkykfDߗJI] B ,E64 R.}T]Ϯ7';yErW z]"~KK};`T& 8l&HDlj|~³8L.Bn $yAe gd(,aXw<7!<*ݗzRIv`[[.݆ks_ =g΋q4ĥ5c>~:Fb'0syp OW`gmNT(Y$qزˎFR6w멓i8n۝-N#-տ;$1BH ΩH+֌B=S@v,WJj4~W (WnMw%Pjp :)v`+f\T/3J&?K["]@(s3-K"%F++lۮhId4bsN7jxD]ĴXmɽb7 c}Peَ|w/Ì|lw/nA2t[܀܀¹lkܾOā(Dg/|/fq)Wȅ|knZ]+*e7j@V~[PвҽU%]lzE1ox90 1z>vE8ou.|pt;ǝïNcogk>S@K_'? 7O$!8%J{AP!K@D +G0" 0mrn$Bop6%~I_.KA=\/z䏋xh1>}vG?Z')z% .*!G~}sY~\Evr~vN>~|BfJr. u~=+%*vzGn8"@(Tݣ2~*۔wSގY&5̔}qbnNf -_Vm=[[[Uk ]J4E)jQqRDD"WҤP,pMAv9gII36r4g(mA/C`HJHe,QK*V$M)0&X_"8}4Y_ WTHA3 ]5N}IyBa)ibF 3O,aRKt4ePj6`oR"QuUqaq6/y~)T-:OqQ8Ͼ.@Ekk* G5$ -UrԚ]˼n..\3qw:iƗlu eeXzICy61%RleV:#0uNſZhפ&kJWNcIii4try,xء~jPB[*wxK;TBM3K;/v`fe!EwBJ(r8;8A}gwqMˋ_<8;~b޺z 2j2Z S~#R6Ŕ*TLEoj"FȜ3d\ =q,T !4g#$tgVB r?}4Ypy<4-/T-7|TMFTGF%UYJ  Qb0Z<\]\y߂hGobi@fey6j6g[usEtr> VI(az8z ; %lBʼnPmpӠ2ퟝ Bcf9ធw},i|,YV܄絖孟dkT6QIA?),e #ý9,6GGlx0Ø<몣Up84B`}<':hӕAц̷Voi.'$Ab}ٟH,e͆.9W7їHgMYNK>;",A3>2mxJd7_u>wO]|t? B6ڤtEAEs18MxW(M&8[Hy>Ms2}KqNӁr<pBO.qE|7 FHB;Eܴ^1:LP.nGc0j5.1ZrV YP^fbVLs.8ubV s^ >&`لh\6 `۝ہU+f*p:Qѷg[k=WRX&/&t'9(YTBDS;(DXaw\Q 54g %!1"i7ୣh9> (!RFtZ#+JW^K FPbҡ9 @@J&1&"\ -)d[ZvWؔn|h4᰼fp (s%T!!|8,\F&O&#a g AftɖV7)Y *YjP=-rcEx7F`B!k%K:x%mY˅^rYљJfubUEb(€k.(F׈]4h W/ ôPF2 ~W4贪"vnYͲ41x ƪ\ȑT*ri=z8)+UL)f}=k FEJE(5Ց@ %҂ߠ0^[{0&"aI}ѳ5At>NHhN|wݐ\:|üVip>ˊ/_kEs"'il`GFѳ#Rϩ$%Pz8eRBǽ4@ (kxS%MHz* ΍[e׸bQJ)SReS ͽEV-Y4C975Es&>vpރ/ 5RifwK]yd TY4 l+G-_QGԯZdN@7݊V7a+|"d9}:@(gTcANCU"*=evX+mH‹%!waeؙu=<B^"5$ՇYwX\[Q_DFFDfDdB(_B1H &+E\G~`e` 0ogP)_=O87&s8ZXy $:ٌ;3bHX x1Xwc {++ߖPBۋC kT}SO /P"[I#ޠw R2 0xph+#e [1qT:>tO0HX[@/PuA]EE 9q÷s,JLI}8T, MZ $/Fg6C0h_g؇r=ۇ4.۟'yڈ-&I -r2^Yᝳv^qmƔ@\w:ѩՈyP>O/mh5TG eZZh[-At1K枔Ҏ}F2;%R6V{V;8wQUZmR.iSvBB^&-DoϨR19h>6ңđڭ y"$Ssܶڍ̝Qb#:sn猑;iHVrM)Qp!BjS\c4M+H/S'Wҕ.ߦwX ;K%=' ?K\@W?'T'PO U ?^m}rw{ Ay z;gjGjlo[ ݗg!qՠ35ϝI C{? \yPXKn9̱xs3^1c<&-'Ӌ1A`glW2BwRxyϏA[zK_F}V77v׷oj,v;'kVLkLj=xGk@Ϫci[^[IM{…4(AV]PJΉ,ewOB>m,FsCqV0} 7D,{_ DZ2Ѓ//@ePB$+5])BItKT'yE5! -V*XF92chd`XKlyBGF%ۻ~!@%;Es3y%H*fB~ٗ XpRM _Iھ Uv>4wdǚ&N K<Lb$8)9)#: 25"FÐ~Z0-wى9UzPO~HYrZ{݇LOC,&ocVQwዸF̗ t@ƌr'S1˧VG6":ewr%fϦ݉Ȋxs6M1 ͈s{h5L\+C 71;Pi?jGQ);~ t得6]DjyI1,$)^6n kyc52mɺY6GK,Ѱ3 e8N[;SG,Hu@:8 ;! +oi5Zw"S9l~cr'Ǎl8I H76"klPTa@68( ^% ͈PCe!Lĉcq(h0 Deؘ/ ߊ/5s"ʌ#Fe.Ӓ E='C4RrGa?P \+DJ]N>N@wDةS 13a@0k? 
)W!PڜdB"s'+B\)Tq)2Q$(2u븚XNJSM7˞zMe%^oJ ~ap1"8b^!qL{gRhqi3p-+gg=lK>im4^09c]2ȶwGI9A*gY!NjXǧ?GdDI8^U>t/qEO$B3tCB1fɼ~-8FY2޴T*@X} KBJwwꭚ MyT@TV?em[ې '&".)GIڰ)t [i_^4KOm0wgsZlݬ_{`qzK%0a `$  ^ Mj zGF 1G1Ho^Xaį${ bwrFa-]Aq{3ZRIvUBY RC66 p-;3}tj0}4~SHi oRMpU}=< c{x2{xdA{~VI3Pd:;=,.jCֳwg7'~:En|ύunPW*(?6yog졐kjЙ|?:q~ k*t{^n3}}}}(rA8n`.4 iU^J\6yH?7x9y0BM6DR0/yHEϓof )LQu?tVajʇ%z{^h@ A&N(zlj)ikG[qЇiˣ\G>o+ev1k6#Y}*ÛjQf@ƍU˗YQՐK^䑅K*-UN;aM?Ņ)~QJ[2X@\)UGt)J(us4͔Cgqq6RJSGgc[q2KP"۴]|zmC0вe7nyZL|;2wt[<"u=eayLGOq3/)ΗG ҆[&xHSB[]w=J@%h,q*WG3G!'B:RS*(-qbVQjʖKGtb10W?|FjnNT 0MEOtcB@ܕyD jpDtAC!!1k0ڊ:%lSg:%p).@)lt MFUկ0$ҙW439(B&9"-ek[ &tB%MR(H'Gk5ו cV5pBՆ 2xu2 !Z2!M B`*i!YˌU0\@ VMR:ڐTbRE# `C5*`a=D)#rQq͘SPbP8Y@PVXp 1 `@BZ)o&5-ǫeZSKCYIPP%N)"xZ%Z!v,C.EFN W 1Eŋ"CC F! 6EQD]15}eހa.vFp+9bT0cBReP3&áW%-04~#VXGpfpQX cvHn~F9uDF67nSWBp4g^AHUJ`=+R~@PPBD F!_jWE PY+Y}[||_zzkq=ػFn$rk!H Mx#?by-_QxZ/-vZxi5*zP3'U> _MW| F'HOV2__LlG4m߁KTPߞܼS.A3%縅Ly5hbgxޭ/27=|Jn#"AX[ Mp!e" L_J0N6BK\ B A@o2*I9 *PEiŒNn-dKt4Je _@讼΃'v8D)Tg%Jveot\*éFSm"FuqQéB^:awkb+[S^Kݚrް[SKy^_8&+l,R* h20 "8giIc>]=dTꄠ꺁cK)AUס%cPy?ҩv>OG#WsHE{|3 P_j&ĻWLj4}&.Ρþ0nw20?GecQC+tۋ47;fh9tOofڥFQ,5[^q|h32\V-,Gݢ0Їw>!anٟu?ãӴRm{1xޡ?Yi:vP:Ƃ4:l4ۉNNG/n5ybj:~ 㝷O!+ҳ~+l:/\DdʨчIoj7(=sDVA蔎F/=qطv+6ݺ/\DKd 6hDQb#:cnK@JnIڭ EH8FiS5ib1v;-!Thϵ:/\DdJfʔ$Q_q/T3'%Z}泥͜6<, ]ֈݻokWͦYozDОB&oۦ,w1j{mr`jy>̾^*d#T0Z49ll#lIlog; q<ĝWtY;]WFwQIN*(/P '٭OwU+Uu߁6%(>-0-:`!cZ/WDzcSQ 1g^~'֟a 94FXlP ;jO,DsW&rF- ޳;)o9T[o80Qs}N/URYs=-JaT\iiM~3qߦ)-& J]>N_+g!g?yW3JU U$0^s)8ompΨ 5u7SK)Ђ[y䏷oS,ké3y6-3!x_04J%Q#lHN=iRNj#]ԀD;@qRx|v">i&6uΏjWig?nh=nV8-.*[kM@Ѩ5}X7A'hر|+f(ɌR tX낒FΥ+϶vك6ܘ]$ܩ|NIB폳 M&bzv GeA_y!JΊB# ܡ1Ƨ]fkFlǠ!˅.rPeƐ-. n}o6_laY}ٵVqq?_w-@U1ĺJ~b娖Zε^9jcc)ڊcCr-Gk`dhl#^YٳO5Bx sC^&t{>{N7@B>.(RA(R W>/I. @:0 IҊD}~CFgD 桚vkeDT2"}ήK&G4E)H-h/j=ui0I#mȖ=\Oh0- yp k4]Z +2 t 5J2eths:w8 ~_?Y1I3ZzCRQ U!O ԋ}BS9C|*џK'Vw"-{;x,de)s bKj㔳:_ѭ/nM|bus6R8"9:3龔Y_ύ= q?)8?QwwT feㆰ뛵gýn { gSNVNH)g'x3'aC-՚1$5 Xf_A ֶ\d&XMc-F*y5R:(s Y̺=ch}.wP|3x+ , iT]o3ᔮto.,b5pKLGr"85 $9c|C'G+ bZ)#Zh7el襜A- d y)uȲڎ1^":?[-=879@|t^MJ1n( ϻV'biљg~ UՌ8}RUJj&N7kƛ(zyAZFUh$hmb J9%VDpRWBvҁjƗ^ \WؠQ Jfյ@T#HDvVṕh_(-k87Z:J+ yD\KF3؂frD "L Ք4v*ՒfI&ըKV4,.dEb^qo~j}HEm+!j]z?C_N5V*kD \n>~S@ 5wPoճhdUknɇm\oن_ ®tSBłED4ԢZQӚ6*~~Zj|9> qȕ!6AnT;ZCJJЍ+i:@[@S4KeȂ0Zj"0N*9w9`zg9"V( 6:kokqz[Gа4 G9()L0%Γx+d RUޡajo}j4a*\i֚unyLmA(q) 8 jx-lwq٢!kІv G/ֵH$ 2n!Z<ҋ3۱FQE|5C,CGŤLěy4\W*H;%Z(>[Dk??橹,>x֯Q׷׏J_):K@`tU:} ^q;Á&WEtAF2AFb|6}j Xt R_Tx74ʥ𼪙56֔ÂW:ֶa/U'YWJya7M*NWbdZ1%#'zc֒¡JµU:K'AQii ).-Q'|R 鹵LG8nVkP.ؚG*zC-\b,e` Bw+GS44pXmnF/Ww{nm&I))=[d{&DKDe2c "i4Fw [q$/gֱg!;),@ޭã B~Y|O̫g 3VwktϷ|KNr$_ #vC7 "|4''>wqwOf//u0;w+u};y\`gXͷn ;.)|ğ}5嘸@ieigfxCWtqDGaLNECpYPGW/:z+7脬ݲ;QO 0w5MP p VHE\-c$.&B+P8E1%;),P0u>T 1)8"I_Ћfc 9oactqs{ $ ǎt ]v==-#: (fRu:gcC۾9(:~G㴤5.rzF69gsS;ND;3Ť:KٙSI] ˜#[8 QmqQ*`ʔXK# kJxO4a!UjWJr C [Ҏ?nuJS*R!nes;XV#JJʬVELr-BT %ȉn%8o V<d.ni{P8Μ᧨n+]J/Q a,Ki%3_) cM6 [4:bF&4DA/ሎ Sĩjq*/ͱC@2igv+J-n ~_pŹ7w@ҭO6 ͉tu|IBa|T};%_2aN6v>T"ؾh #7ETOOiIN 쥾n=vMi\K9\iXg k 3474FMX6@H;N0I'w?0W0GPyMGm{M"mW3r0PEdVolS۳;SHKX~Q򈐌>J Ixd,$G=yJTӂ,Z0J 5d^ڶ ¸b+q[!$5VD 0z?NO}ܬ8-=B_O_ FWA(MoO fiQmLxX0MKw7(Ŋd!ǣ!,%VӅ)K`J/LJY!JD˪(LgK] c8ՠRV9ƸRY\aH+~HKO L2 JX9ȭiXXUJ^X8* I,/4a c18eMd=I ?iaJRH\^{$GhpPh;17F/jN~@^citG2;JyQݦHۼ<3[`lavvz<9`K_Gv!7t61b"wPcXS@FrD&CA/a~'/%= 8^d[>@\w߱\:_a}=Q|WpuYD٩&KkKV%&P 1 0Ni sRbDhݯ 5'E;2Ŝq&J4r|tRdk i$TbظC$M@79 )BLXBPnXY\LNq4Q09UύF8MhQ3(UluVວi XJ A; ;7Ǫ|oo>u21&f< R 4$*C:3fxU 5šV;30 X!Jd4.["(`vwuDIѣ98@ dg2>NR۟IZ cUeWS sԀ%XYvn 03+CϿۦVZ"{zv9^]&iOujyws;Sյ7:r|y_05?R\cY=]fB_nן1kKX"Ux7WիXkdoV$:4ގ6zȝEd!%?T*uNxrFRqZ3e=WJ OÓ"%"sA6}v:l(yR'NeΖ#Q>lK8;Iv+sSj#^2#Ơr9N ,Jhas$0^\Ϛ$OQ Uxl_H1:\K`SL- SEYiB]d3gμšǜy9)E3,yvɳb op PKgJF,=-?H)hC:pvw#8{zSQFF) 
&NP(ZpCZ0vr=kSۥ ǬMh *E|!^!-ĬBP!m+ҸE;ֱFh5|PH"=3NfN_N%BH|(u< y&mHBH;r 0-gHyS(EH:My<8옕GptDHF[I!䈐v%!Xq%3hZg_廻g f:z;h:ݍ*D6[m{2<}K @px 8!,yO _f rNpYhcJ5UB2(K+ҦP]*%k*Jl` !L.ztqb"~01$P֠@aRa[][P*C[T820Se,KVJE?]N~IqOnֺaŵߣ&/o|> ~X^잎EڍI{SOF$֙ :puy3Wd@tѭNk"c'g;1%ADŃ]Z/-8^4Z&(,D1z<V-,!]Ud;v颭G^.i2_Vi)aešl~x#6 V[t&#:hÀB,1GJEؙ ( ”"ki(I4#DjG@$6#X#B!fp,32O8,=J MhHѼm"byClMlTL8on+7ݨ>w>¿χ52TqվP][:܁|p.'|<+}b7>b׫,ptd!E ng9mR.0輛]^Uݵʟ!Cϗ_WKBIpa"n>VTJ|mqȶ+iN8CR5"m\=mN0h鯗qmx< |$ K43v.2 Cq^)!3NYxœP999EԡLXv2k^7&Cxᇪu@r DTg<VΡuR@ۄ0LF`(٭M؂b_J&͆O|u03o䵉youYC׭a1u  HAlaj^1'հpoO 9g-W u_;bD컸J f 'IN:]݁ k!vj&0bQI@RQ*p|XmIՈF+O;m Y!M"6NKq+- %b,)@|.<]Q35TH5G@H)I#I!˟򽖡|4hneL]v_?o`yǬ!銋5s0bS ^n Dv:؂F:'!ϟdG#[Aab,ky{zt0Y_*6XR0#N*Fȼw$>OFkR% TN|rͰ'5 5Jl!f ;CLH[LyE 1\j6Dd2k ׹ujBJ[-xd3_n^2_yeUV& _ 'v544=ㇿ~{sK2M|7|+O~R4Oybo?~s6Z^]]~x~> csU,΃.5T~-遫\ߞ|>$r ϜPlkp-~;I G^rp?3 Ī%GI%$"@3Dc)8ܷ օR+Oqn8S#荿;90}/M!8$`lvBM^+'o+Nz`eh 4P-7rtLi]iAvH^ bz@#o^{qh"6B[-;[B 0^X@L7gqjic^XzeKw5<.׾HZY9& c- ^g="<m<ภD3OTE|Դ-Eyj1D s2FGCtvuJj@_NxjHJ`Fw\,\y79{M_ՎOuWx[ cfYZ{ 2h_:ExZ8!zN ; _&R1By$Yoh] d$ovSԠ>8[~?P9( h\?D}p}W"ӱiō8@TNow/U?Xm̜z;InFrAfQRg;$q5Ɍ{y\8h]a(:+g cwukϘFMdnQ+$ 8}e]NVcN*Ў3l"l^^V8¡thuE82@p*08\ȊrupZ! 9)K1J"䍫>hᜰsB!eg)9{NVxtR 9ZeZm54KVJK4f[[Z%Wr{&K^n.]vj"ĤӟLwN":]ds:ߐf)P儓N3iI2R̀ʅ (R.WÑ1g+9{K߳?O>KjgΤmC'EvGek~elsjz4ێE.- [B ŌAq`X-A8t,8)31!#JU$iN]m32mk umXJmhPٖ[6kub  $*jTv;^T1lo1t-b.\^]NVUz|.W.֕F$KS+mrl-~[ Fu!;BcV7cIzxUO1fȵ!.#(D~Lycz{St_>j?_VoA\{5?h?m#6ofOH><&,y=ydOG>4x#m)-qdں7(-DNQ]$,^==wubm,Ne]+Y!9K?4@i]m=9ֶDZ:!N=7QQIK4#lʷi$ރlc"\ͻ`x+P8&øVΟ^<L'{Kn\]^I%WWqh?o^?Svmc%yrpQf1f)Պ u]?{K5wui߭~#EiKm<=*7832R~D%QL8KմRBLBɷnx-\[ ^c9HbH>=oyicE-{\(~WQ@SUF?XʜVq"m$FwU-z@XA\qD9[iXCB<܀(o~-E +]%(X_*/*q4tlmI}otO`wBUocTO!d!^ռ%2Z\BtnZb) su3)F[Ԃ:M*(f A3(biĕwgQ٢.E]d[T+pQySqZ;R CAidb kfs1^O/)!jV] v@Zji<>К(.dV4K;Ncjt}Yk<0U/GiMr}GH Tz}.K 4"zbGHe 9ct&9*b$LY y~n_~R&BQ15'  KAZ 馟Jh!_ݴj_qcɃc3$-E]Q#SۺF#\UFuŴIEF@yoDK:l&P\AZp ye^z7$c⍖3´L F  f5" ed+=jBdn@}g+xwE2۞ҤLPe2R긜^%DM63A.my1z& {ᴒdbrzX@ zjsX|\x7S7ӟoi"!2`Sgls?i rfHȩ)!HS$ɧ!twǃq~wKLs)66^Q /S{q+|nݓk PWN\+rN/O\vH|Bet{ h@94jDe)쀙 4n˅\cJ}* X : QBTudV**&ྲy .%SS{}C! 
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1473017949]: ---"Objects listed" error: 12365ms (15:27:57.705)
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1473017949]: [12.36595257s] [12.36595257s] END
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705196 5008 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705214 5008 trace.go:236] Trace[1594515651]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.616) (total time: 11088ms):
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1594515651]: ---"Objects listed" error: 11088ms (15:27:57.705)
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1594515651]: [11.088289036s] [11.088289036s] END
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705251 5008 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 15:27:57 crc kubenswrapper[5008]: E0129 15:27:57.705320 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705457 5008 trace.go:236] Trace[206545253]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.020) (total time: 11685ms):
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[206545253]: ---"Objects listed" error: 11685ms (15:27:57.705)
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[206545253]: [11.685202424s] [11.685202424s] END
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705483 5008 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.706098 5008 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.708104 5008 trace.go:236] Trace[637164502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.310) (total time: 11397ms):
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[637164502]: ---"Objects listed" error: 11397ms (15:27:57.707)
Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[637164502]: [11.39716807s] [11.39716807s] END
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.708139 5008 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.720986 5008 csr.go:261] certificate signing request csr-42t29 is approved, waiting to be issued
Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.741979 5008 csr.go:257] certificate signing request csr-42t29 is issued
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.119984 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.120058 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.247727 5008 apiserver.go:52] "Watching apiserver"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276241 5008 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276509 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276857 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276947 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277070 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277096 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277146 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277147 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277203 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277281 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.278791 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279835 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279888 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279952 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279992 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.280198 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.280487 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.281189 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.283184 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.285069 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:09:22.9882038 +0000 UTC
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.307996 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.319165 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.330276 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.340526 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.352299 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.367035 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.370060 5008 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.380024 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.390956 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411188 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411240 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411310 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411335 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.411381 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411443 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411505 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411659 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.411686 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411709 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411732 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411519 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411890 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412164 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412607 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412677 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412754 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412774 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412823 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412950 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412987 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413024 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413047 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413048 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413105 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413137 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413169 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413192 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413216 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413396 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413420 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413445 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413468 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413518 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413543 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413567 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413596 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") 
" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413624 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413860 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413897 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413996 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414223 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414248 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414265 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414297 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414693 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414803 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414830 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414842 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414906 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.414940 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.914914047 +0000 UTC m=+22.587768294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.415032 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.415068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416425 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416458 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416570 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416594 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416619 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416643 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416666 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416693 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416723 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416791 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416824 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416853 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416886 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416911 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416966 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417017 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417044 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417088 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417115 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417141 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417152 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417190 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417214 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417239 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417266 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417342 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417348 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417392 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417416 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417440 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417465 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417513 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417583 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417607 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417629 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417652 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417674 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417698 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417719 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417752 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417804 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417830 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417856 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417877 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417904 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417927 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417949 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417972 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417999 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418025 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418048 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418071 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418093 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418098 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418115 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418136 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418158 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418180 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418201 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418222 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418243 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418265 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418291 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418316 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418339 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418365 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418464 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418488 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418491 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418510 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418555 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418577 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418600 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418623 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418667 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418689 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418712 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418733 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418759 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418800 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418824 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418848 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418854 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418873 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418923 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418950 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418972 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418994 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419015 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419038 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419062 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 
15:27:58.419087 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419113 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419135 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419156 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419179 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419205 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419227 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419251 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419274 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419297 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 
29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419322 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419347 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419369 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419417 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419491 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419514 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419579 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419602 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419625 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419672 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419694 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419722 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419744 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419769 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419812 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419835 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419858 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419879 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419902 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419924 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419947 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419970 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419994 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420018 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420244 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420292 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420317 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420363 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420438 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420461 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420485 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420530 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420560 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420589 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420653 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420737 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420883 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420912 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420967 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420994 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421079 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421096 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421110 5008 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421124 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421137 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421149 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421162 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421174 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421186 5008 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421202 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421216 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421228 5008 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421241 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421254 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421266 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421280 5008 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421292 5008 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421304 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421317 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421330 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421364 5008 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421378 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421392 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421404 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421416 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421429 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421443 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421460 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421473 5008 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421488 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421501 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421514 5008 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421526 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421538 5008 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421551 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419035 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419092 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419239 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419302 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419457 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422390 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419840 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420367 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421039 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421558 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.421649 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.422601 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.92258319 +0000 UTC m=+22.595437437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421699 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422624 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421881 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422131 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422333 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422526 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422852 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423027 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423042 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423137 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423150 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423244 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423400 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423501 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423637 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424035 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424048 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424428 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424533 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424568 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424633 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425523 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425547 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425817 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426374 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426613 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426652 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426732 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427020 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427022 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427285 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427756 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427846 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.428163 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.428250 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.429096 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.429866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.433035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437358 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437592 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437429 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437930 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439033 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439584 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439939 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.440193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.441579 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.442565 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.443332 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.443814 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.444305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.444509 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.445849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.446895 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464236 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464345 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464518 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464285 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465034 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465434 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465535 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465854 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.466563 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468589 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468720 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468959 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469028 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469166 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469395 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469435 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469617 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469638 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469649 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469989 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.470280 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469890 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.472408 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.475439 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.476514 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.476911 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.477323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.477636 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.977171623 +0000 UTC m=+22.650025860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479214 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479925 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.480848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.481046 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.482198 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.483155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.484273 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.484899 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485201 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485279 5008 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485363 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485463 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485888 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.486836 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.491115 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.491752 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.499420 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499594 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499686 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499752 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499901 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.999878256 +0000 UTC m=+22.672732493 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501121 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501527 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501645 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" exitCode=255 Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"} Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501878 5008 scope.go:117] "RemoveContainer" containerID="824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501767 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502481 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502740 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502754 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.506339 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.506744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.507144 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508568 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.510589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510885 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510903 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510916 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510959 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.010943165 +0000 UTC m=+22.683797402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.511184 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512535 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512815 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.514016 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515122 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515154 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515201 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517041 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517323 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517621 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517756 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.518375 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.518490 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.519753 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.519934 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520060 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520254 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520483 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520554 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520771 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.521007 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.522203 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.522683 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523243 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523562 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523628 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523673 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523753 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523767 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523796 5008 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523811 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523823 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523835 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523848 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523859 5008 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523873 5008 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523885 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523896 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523908 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523926 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523937 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523949 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523958 5008 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523970 5008 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523980 5008 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523990 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524000 5008 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524011 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524022 5008 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524033 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524043 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524053 5008 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524138 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: W0129 15:27:58.524216 5008 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes/kubernetes.io~secret/certs Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524229 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524351 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524392 5008 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524407 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524420 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524444 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524457 5008 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524470 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524480 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524491 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524500 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.524511 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524521 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524532 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524543 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524555 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524566 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524576 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524586 5008 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524598 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524611 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524623 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524636 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524647 5008 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524658 5008 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524668 5008 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524680 5008 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524691 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525145 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525228 5008 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525246 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525258 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525269 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525279 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525290 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525300 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525314 5008 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525327 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525337 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525350 5008 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525361 5008 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525357 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525373 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525385 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525404 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525415 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525425 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525436 5008 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525456 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525467 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525486 5008 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525495 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525504 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525514 5008 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525525 5008 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525535 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525545 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525582 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525594 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525606 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525616 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525626 5008 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525636 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525646 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525657 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525669 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525690 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525700 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525711 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525721 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525742 5008 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525754 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525764 5008 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525791 5008 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525804 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525815 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525825 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525837 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525847 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525857 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525869 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525881 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525891 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525901 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525911 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525921 5008 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525931 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525942 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525953 5008 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525963 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525974 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525984 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525996 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526006 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526017 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526030 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526041 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526052 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 
15:27:58.526064 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526074 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526086 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526098 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526109 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526120 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526131 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526144 5008 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526156 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526168 5008 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526179 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526190 5008 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526202 5008 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.526212 5008 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526224 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526262 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526275 5008 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526287 5008 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526299 5008 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526310 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526321 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526330 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526341 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526352 5008 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526363 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526373 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 
crc kubenswrapper[5008]: I0129 15:27:58.526384 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526394 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526733 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529111 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529414 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529512 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.531417 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.533625 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.535016 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.544736 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.545252 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.545888 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.553331 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.553331 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.555500 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.561644 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.572391 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.582670 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.588482 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.594459 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.595494 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.601164 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.608466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.623854 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627188 5008 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627683 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627752 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627851 5008 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627923 5008 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627978 5008 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628037 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628117 5008 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628205 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628269 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.651107 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.661831 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.680420 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.697530 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.711495 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.743865 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 15:22:57 +0000 UTC, rotation deadline is 2026-10-22 21:15:26.897945549 +0000 UTC Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.743928 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6389h47m28.154019666s for next certificate rotation Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.866553 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wtvvb"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.867178 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.869336 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.869565 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.873144 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.894444 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.906631 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.917368 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.927327 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" 
len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931529 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931607 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931649 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931673 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931834 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.931766162 +0000 UTC m=+23.604620399 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931878 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931968 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.931944297 +0000 UTC m=+23.604798704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.941596 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.954409 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.968847 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.980088 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.990694 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032396 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032435 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032481 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032499 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032565 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032628 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032613495 +0000 UTC m=+23.705467732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032838 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032857 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032870 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032837 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032904 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032893112 +0000 UTC m=+23.705747349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032908 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032954 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032976 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.033008 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032997525 +0000 UTC m=+23.705851762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.051438 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.181494 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.230927 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-78bl2"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231508 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-42hcz"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231673 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gk9q8"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.232333 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.232569 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.234551 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.234870 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235145 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235658 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235842 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236308 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236544 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236561 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236698 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236743 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.238101 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.239725 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dede057_dcce_4302_8efe_e2c3640308ec.slice/crio-cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc WatchSource:0}: Error finding container cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc: Status 404 returned error can't find the container with id cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.247663 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.261448 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.277022 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.285499 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:04:11.636766864 +0000 UTC Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.291657 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.304850 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.331075 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.332710 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.333492 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.333378 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.334143 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.335127 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.335615 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336270 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336313 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336371 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336394 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336433 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336452 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336474 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336502 5008 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336521 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336538 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336540 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336848 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336961 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337064 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337063 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337665 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337694 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337762 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337806 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337836 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337885 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337909 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337932 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337962 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337987 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.338329 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.338852 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.339738 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.340512 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.342050 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.342710 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.343548 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.343890 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.345029 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.345589 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.347514 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.349081 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.349763 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.350698 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.351336 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.351770 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.355222 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.357046 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.357699 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.359737 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.361199 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.361673 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.362251 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.363531 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364047 5008 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364196 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.366412 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.367180 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.371626 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.373349 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374012 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374346 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374927 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.375532 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.377164 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.377661 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.378874 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.379524 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.380523 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.382845 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.383482 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.386315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.387835 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.388736 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.389814 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.390396 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.391077 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.392108 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.392661 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.393591 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.403947 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.416303 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.426340 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.435482 5008 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438565 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438589 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438668 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438704 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438636 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438798 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438810 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438860 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438814 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438901 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " 
pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438957 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438994 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439020 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439038 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439063 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439125 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439071 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439183 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439218 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439242 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439265 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439289 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439311 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439334 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439359 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439368 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439420 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439444 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439511 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439534 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439557 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439639 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439676 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439869 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439890 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440037 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440091 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 
15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440510 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.442917 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.448547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.459844 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.461316 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.461754 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.463958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.479578 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.491547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.497964 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.503051 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.506316 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.508806 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.509001 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.510255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wtvvb" event={"ID":"2dede057-dcce-4302-8efe-e2c3640308ec","Type":"ContainerStarted","Data":"63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.510295 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wtvvb" event={"ID":"2dede057-dcce-4302-8efe-e2c3640308ec","Type":"ContainerStarted","Data":"cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.511151 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e77b0d1917796cde25b55664bf23efd7ed77639f9bdcac08bf26dbbb557870a9"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512887 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512986 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"477179bf249a19b16e085eee86630532632185d70cd428684e1abfdf97d53f95"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.513821 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076"} Jan 29 15:27:59 
crc kubenswrapper[5008]: I0129 15:27:59.513882 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"11b3eb18bc1e054c634937244422a000e1ad2ecccff77ecb72f04109c5cbf34a"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.519120 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.520596 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.531838 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.544913 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.545415 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.553707 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.555531 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca0fcb2d_733d_4bde_9bbf_3f7082d0e244.slice/crio-5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608 WatchSource:0}: Error finding container 5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608: Status 404 returned error can't find the container with id 5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608 Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.559373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.559351 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.573091 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa065d0b_d690_4a7d_9079_a8f976a7aca3.slice/crio-feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490 WatchSource:0}: Error finding container feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490: Status 404 returned error can't find the container with id feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490 Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.576891 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.595959 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.613608 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.624902 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.626005 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628569 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628714 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628879 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628960 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630416 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630680 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.648825 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.649058 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.665465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.679156 5008 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.696413 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.710907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.739311 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744734 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744792 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744827 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744873 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744892 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744910 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744928 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744948 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744996 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745021 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745056 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745074 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745091 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745111 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745634 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745889 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745991 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.746024 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.779103 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.820290 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846697 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: 
I0129 15:27:59.846713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846769 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846773 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846867 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846835 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846908 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846928 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846938 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846956 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846962 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846972 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846992 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847005 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847019 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847038 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847054 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847071 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847087 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847110 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847157 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847772 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847900 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847903 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847929 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847951 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847975 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.848234 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.848322 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.851397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.857520 5008 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.886457 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.918613 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.947599 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.947743 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947775 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:01.947747347 +0000 UTC m=+25.620601584 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947875 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947947 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:01.947926692 +0000 UTC m=+25.620780949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.958065 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.964684 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.971498 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d092513_7735_4c98_9734_57bc46b99280.slice/crio-3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2 WatchSource:0}: Error finding container 3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2: Status 404 returned error can't find the container with id 3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.001588 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.042019 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048731 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048904 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048918 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048928 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048967 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.048951601 +0000 UTC m=+25.721805838 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049230 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049248 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049255 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049276 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.049269829 +0000 UTC m=+25.722124066 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049304 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049321 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.049316441 +0000 UTC m=+25.722170678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.088183 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.121531 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.163149 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.200500 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.252906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29
T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.283975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.285960 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:41:22.739610229 +0000 UTC Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323111 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323177 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323231 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323330 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323475 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323478 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.362645 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.400206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.441211 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.517950 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.518011 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"9483bcd1b2d3148e3e1c18b543c80ce2fa9143c3acccb478fc92b911e23621f6"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519646 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" 
containerID="dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456" exitCode=0 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519729 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521720 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523184 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" exitCode=0 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.532140 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.552696 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.565761 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.597391 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.635579 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.684342 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.722508 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.761695 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.800295 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.838410 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.888884 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.918144 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.964766 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.000665 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.044830 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.084483 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.120826 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.123578 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qj8wb"] Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.124002 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.150568 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.169636 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.189718 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.209066 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.238911 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263509 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263554 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263586 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.279964 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.286124 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:05:04.607345531 +0000 UTC Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.319037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\
"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.358975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364518 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364578 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364751 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.366079 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.405346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.417847 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.455220 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.456882 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: W0129 15:28:01.467381 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ffbfcf6_99e5_450c_8c72_b2db9365d93e.slice/crio-a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44 WatchSource:0}: Error finding container a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44: Status 404 returned error can't find the container with id a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44 Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.499507 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.537935 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.540531 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qj8wb" event={"ID":"9ffbfcf6-99e5-450c-8c72-b2db9365d93e","Type":"ContainerStarted","Data":"a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.544315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.545291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549164 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549208 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549220 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549232 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549244 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.583475 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.652962 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.676069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.703470 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.738315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.784602 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.817086 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.860223 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.899302 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.939465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.979401 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.979489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979559 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:05.979526084 +0000 UTC m=+29.652380321 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979609 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979664 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:05.979643528 +0000 UTC m=+29.652497765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.984856 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.020275 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.060142 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080311 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080370 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080405 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080461 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080464 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080496 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080508 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080494021 +0000 UTC m=+29.753348258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080511 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080554 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080542562 +0000 UTC m=+29.753396799 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080609 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080646 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080687 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080753 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080732377 +0000 UTC m=+29.753586674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.097513 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.140180 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.180553 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.222409 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.258986 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.287172 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:59:45.234006706 +0000 UTC Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.323857 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.323876 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.324079 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324238 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324336 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324066 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.556518 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.558178 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qj8wb" event={"ID":"9ffbfcf6-99e5-450c-8c72-b2db9365d93e","Type":"ContainerStarted","Data":"eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.561196 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e" exitCode=0 Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.561281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.576399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.592827 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.609562 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.619744 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.633602 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.649194 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.661038 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.672679 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.685996 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.708362 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.720061 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.739663 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.780129 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.819845 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.863973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.898292 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.937980 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.982155 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.019483 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.057957 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.106611 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.139268 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.179466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.219999 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.260378 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.288080 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:08:03.429161567 +0000 UTC Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.302469 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.337615 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.382591 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.417037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.466392 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.567155 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36" exitCode=0 Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.567237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36"} Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.579162 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.599468 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.616576 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.629715 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.662428 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.700347 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.740425 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.783540 5008 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2
538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.820228 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.862752 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.903967 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.942085 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.981669 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.019181 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.073059 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.105812 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107535 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107579 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107593 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107744 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.115430 5008 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.115912 5008 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117620 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117751 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.153941 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159686 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159738 5008 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159754 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159810 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.174169 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177646 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.190339 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193807 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193847 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193860 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193890 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.205237 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208238 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
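The "Internal error occurred: failed calling webhook" wrapper in these retries means the kubelet's PATCH was rejected at admission, before it ever reached storage: on OpenShift, the network-node-identity component registers a validating admission webhook ("node.network-node-identity.openshift.io") that the API server must call over TLS for every node-status write. A minimal client-go sketch for listing the registered validating webhooks and the endpoint each one resolves to; the kubeconfig path is illustrative, not taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; point this at an admin kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	whs, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, conf := range whs.Items {
		for _, w := range conf.Webhooks {
			endpoint, policy := "<unset>", "<default>"
			if w.ClientConfig.URL != nil {
				// URL-based webhooks, like the https://127.0.0.1:9743 one above.
				endpoint = *w.ClientConfig.URL
			} else if w.ClientConfig.Service != nil {
				endpoint = w.ClientConfig.Service.Namespace + "/" + w.ClientConfig.Service.Name
			}
			if w.FailurePolicy != nil {
				policy = string(*w.FailurePolicy)
			}
			fmt.Printf("%s endpoint=%s failurePolicy=%s\n", w.Name, endpoint, policy)
		}
	}
}

A webhook whose failurePolicy is Fail turns any TLS problem on its endpoint into exactly the hard admission error seen here.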
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.218920 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.219036 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220408 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
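After repeated identical failures the kubelet gives up for this sync period: "update node status exceeds retry count" corresponds to the kubelet's fixed retry budget (nodeStatusUpdateRetry, five attempts in the upstream sources). The root cause is spelled out in the TLS error itself: the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A small Go sketch, assuming shell access to the node, that reads the certificate dates straight off the listening socket named in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate here: the certificate no longer
	// verifies, and the point is only to read its validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.String(), cert.NotBefore, cert.NotAfter, now.After(cert.NotAfter))
	}
}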
event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220435 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220444 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220469 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.289163 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:38:40.177708255 +0000 UTC Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.322985 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323020 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323297 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323457 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323671 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323813 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324451 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.426643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427021 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529904 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.574503 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.578612 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b" exitCode=0 Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.578671 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.599297 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.613416 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.628824 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632474 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632532 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.638255 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.649599 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.662339 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.681708 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.693975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.708125 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.722723 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.733395 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735082 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735120 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.751361 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.767021 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.785461 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.797180 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837457 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837522 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837533 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.891014 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.891871 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"
Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.892071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940474 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940482 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940496 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940504 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042333 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145641 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145712 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145844 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145855 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248692 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.290309 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:32:22.959970613 +0000 UTC Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352150 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352244 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456265 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456390 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559766 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559927 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.560016 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.589626 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.609266 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.628528 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.645628 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663241 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663288 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.670935 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.689626 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.706119 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.730546 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.747896 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766428 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766446 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766480 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.777403 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.792434 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.805410 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.816205 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.829723 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.847768 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.865595 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869735 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972890 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972906 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972930 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972946 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.019267 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.019453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019577 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019673 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.019585873 +0000 UTC m=+37.692440110 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019732 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.019722547 +0000 UTC m=+37.692576784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076517 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076599 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.120986 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.121046 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.121073 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121199 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121235 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121251 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 
15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121260 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121303 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12128885 +0000 UTC m=+37.794143087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121348 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12131013 +0000 UTC m=+37.794164397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121524 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121579 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121606 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121689 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12166748 +0000 UTC m=+37.794521797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179599 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179608 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282470 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282549 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282580 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.290947 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:15:53.281470826 +0000 UTC Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323347 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323382 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323489 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323676 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323826 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323939 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385509 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385824 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490605 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490697 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593380 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593898 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.597920 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3" exitCode=0 Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.597973 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.619756 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.635134 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.663990 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bc
a2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.681450 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.695219 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697397 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697428 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.713244 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.729887 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.746423 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.760172 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.777817 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.795112 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805365 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805447 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.808683 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.829718 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.844301 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.858582 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908917 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908927 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908960 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012730 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012767 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012800 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012819 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012831 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.020737 5008 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115361 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217681 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.291319 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:18:08.711078548 +0000 UTC Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320605 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.343390 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.358072 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.375326 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.386001 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.398237 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.414384 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422746 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422755 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422773 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422801 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.425129 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.442511 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.452234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.464951 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.482624 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.493417 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.507535 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526315 5008 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526394 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526408 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.530548 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.546463 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.603807 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e" exitCode=0 Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.604099 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613212 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613668 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613704 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.619383 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071
bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629327 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.630065 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.644126 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746128 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746136 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746163 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.748734 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.749246 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.754426 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.770906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.795851 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.810283 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.826279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.835666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.847527 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854866 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854895 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854925 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.865666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.876801 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.891817 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.906254 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.919753 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.935042 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.949460 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958487 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.970080 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc
12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.981149 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.993442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.004451 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.014234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.031334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d
6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.042047 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.055325 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060526 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060581 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.068885 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.084562 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.098903 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.116977 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.129993 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163179 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163210 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163223 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266267 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266349 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.291633 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:01:14.073301513 +0000 UTC
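[Note] Every "Failed to update status for pod" entry in this log fails the same way: the kubelet's status PATCH is intercepted by the "pod.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-29T15:28:08Z. (The certificate_manager entry just above concerns a different certificate, the kubelet-serving one: it is valid until 2026-02-24, though its rotation deadline of 2025-12-16 has also already passed.) Below is a minimal Go sketch of how the endpoint's validity window could be confirmed from the node; the program and its output format are illustrative assumptions, not part of the kubelet or OpenShift tooling:

    // certcheck.go: connect to the webhook endpoint named in the log and
    // report its serving certificate's validity window against the local
    // clock. Hypothetical diagnostic, not part of this log's tooling.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken from the log. InsecureSkipVerify lets us fetch the
        // certificate even though normal verification would fail.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Println("no peer certificate presented")
            return
        }
        cert := certs[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        // Mirrors the x509 check failing in the log: the node clock
        // (2026-01-29T15:28:08Z) is after NotAfter (2025-08-24T17:21:41Z).
        if time.Now().UTC().After(cert.NotAfter) {
            fmt.Println("certificate has expired")
        }
    }

Until that webhook certificate is rotated, every status patch below should keep failing with the same x509 error.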
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369082 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369163 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369174 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471922 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471930 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471944 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471955 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574580 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574619 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.619409 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.619491 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.640654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.656004 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.667411 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.677960 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678129 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.698625 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d
6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.713925 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.731288 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.748442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.764227 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780677 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780688 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.785248 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.801730 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.813497 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.825227 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.841044 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.862622 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.877835 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883906 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883975 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986199 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986302 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091370 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193772 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193898 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193937 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.292313 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:20:46.140382029 +0000 UTC Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296290 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398695 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398717 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501738 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501799 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604629 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604729 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.622331 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707192 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707234 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707245 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707262 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707273 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809777 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809830 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809855 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912192 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912259 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912302 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015412 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015486 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015508 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015522 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124272 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227605 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227685 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.293054 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:18:23.896959013 +0000 UTC Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323457 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323475 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323538 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323812 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330619 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330636 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330647 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433722 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433733 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433752 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433763 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537280 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537341 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643873 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643901 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643914 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747373 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954220 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954306 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057551 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160616 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160743 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160763 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.263704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264216 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.293527 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:23:51.504832049 +0000 UTC Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368174 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368323 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368353 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471687 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471747 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471805 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471820 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581613 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.616757 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp"] Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.617281 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.620452 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.620473 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.631503 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.634963 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" exitCode=1 Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.635034 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.636232 5008 scope.go:117] "RemoveContainer" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.640066 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.659217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.672980 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680407 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680433 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.683945 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.683994 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684028 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.693219 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.715000 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.728617 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.739411 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.755531 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.769747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780641 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780894 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780957 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780977 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.782069 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.782323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793379 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793922 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793954 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.797092 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.799213 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee18
47b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.812309 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.825967 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.838809 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.850829 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.864875 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.885614 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896421 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896516 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896640 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.901518 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.919689 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.932054 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.936279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.955442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.968732 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.984445 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.996560 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999546 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.009084 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.021342 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.031859 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.058037 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.071907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.087258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.101312 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.102977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.115708 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.294270 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:18:06.009075507 +0000 UTC Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309731 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309773 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309811 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309847 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323481 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.323606 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.324052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.324138 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413934 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.414019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.414031 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516808 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516821 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516839 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516851 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619581 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619722 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.640849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"6d96a41832f35a1dded0a118e669f305e267d9975e28965d27e7226d5b16e279"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724428 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724447 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828158 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930630 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930699 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033413 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.125338 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"] Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.126174 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.126291 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136139 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.141959 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.158382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.174930 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1
688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.187107 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.198384 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.198437 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.199429 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.216671 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.234932 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8e
e7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"et
cd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238980 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.255533 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.273486 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.285262 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.295229 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:31:07.932536838 +0000 UTC Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.298994 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.299522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.299559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.299804 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.299964 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:13.799932995 +0000 UTC m=+37.472787242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.317843 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.318138 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.339738 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.353030 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.364284 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.373986 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.386707 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443443 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443458 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443482 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443499 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.545968 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546066 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649580 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649608 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.650340 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.655681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.655871 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.659271 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.682365 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.696931 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.717651 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.737069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.747771 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754319 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754363 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.767072 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.782015 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.798069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.805149 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.805432 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.805530 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.805498765 +0000 UTC m=+38.478353002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.827231 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"nam
e\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3807
11ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.842400 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856134 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856755 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.869071 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.889279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.917015 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.938521 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.956127 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958866 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958875 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958907 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.973347 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061172 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061182 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061203 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.114204 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.114348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114389 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.114362349 +0000 UTC m=+53.787216586 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114456 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114513 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.114497362 +0000 UTC m=+53.787351659 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163107 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.215873 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.215944 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.215922562 +0000 UTC m=+53.888776809 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216015 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216033 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216044 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216087 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.216076346 +0000 UTC m=+53.888930583 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216088 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216127 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216141 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216211 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.216191209 +0000 UTC m=+53.889045446 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265534 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.295939 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:02:30.440606339 +0000 UTC Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323664 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323680 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.323852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323939 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.324062 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.324145 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.351970 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352069 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.363979 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367638 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.381823 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386117 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.398810 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401956 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.402010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.402020 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.412246 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415260 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415319 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.427417 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.427559 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429104 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531155 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531180 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633887 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633923 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633945 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633963 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.664233 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.664907 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668067 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" exitCode=1 Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668159 5008 scope.go:117] "RemoveContainer" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668988 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.669150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.669636 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.683729 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.695827 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.714258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.724300 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.733103 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737321 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.751358 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.762749 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.777636 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.788255 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.801109 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.811901 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.820946 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.821136 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.821207 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:16.821187472 +0000 UTC m=+40.494041709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.822238 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.833205 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.839975 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840028 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840061 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840074 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.846903 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.857206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.866741 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.886704 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.900293 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.915067 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.935082 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc 
kubenswrapper[5008]: I0129 15:28:14.941914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941945 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.947715 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.960553 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.973131 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.994058 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f396183
7d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.007153 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.029966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045637 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045834 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.063590 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.083354 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.109988 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fb
f5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.123433 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.136960 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147761 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147799 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.151821 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.163228 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.177690 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250574 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250715 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250742 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.296504 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:34:41.447143995 +0000 UTC Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.323357 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:15 crc kubenswrapper[5008]: E0129 15:28:15.323754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818876 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818920 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.819006 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.822267 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921429 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921489 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921554 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024223 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024326 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024356 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024379 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127726 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127761 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127774 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230757 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230836 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230894 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.297358 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:57:28.35370186 +0000 UTC Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323338 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323405 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323421 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.323549 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.324034 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.324157 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333614 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333646 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437610 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541185 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541240 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541254 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541298 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644697 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.747986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748123 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.843141 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.843340 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.843421 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:20.843397432 +0000 UTC m=+44.516251669 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.850971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954650 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954692 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954719 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954733 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057953 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161543 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161554 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263655 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263716 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263768 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.298267 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:48:20.285726841 +0000 UTC Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.323732 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:17 crc kubenswrapper[5008]: E0129 15:28:17.323904 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.336714 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.350191 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.364646 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366135 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc 
kubenswrapper[5008]: I0129 15:28:17.366190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.376713 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.412280 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.428420 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.445291 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.459087 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470490 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470563 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.472101 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.485161 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.497063 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.509389 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.523680 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.535388 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.547006 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.568548 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573225 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 
15:28:17.573238 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.580878 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675693 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675849 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778760 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778889 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778973 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882588 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986323 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986687 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986755 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986767 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090778 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090838 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195896 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195948 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.196010 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.298501 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:50:45.45588741 +0000 UTC Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300199 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323417 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.323623 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323651 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323723 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.323969 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.324324 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.324895 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404097 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404613 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404638 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507161 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507187 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507198 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.609995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712812 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712858 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712872 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.839034 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.840688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.841707 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.857251 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.874286 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.907274 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.917963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 
15:28:18.918085 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.922260 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.939176 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.957032 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.979071 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.001126 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020343 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020385 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7
8d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.035294 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.049666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.064438 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.078502 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.091365 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.111564 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123206 5008 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123242 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.125981 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.138693 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226235 5008 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.299115 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:46:37.392876664 +0000 UTC Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.323864 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:19 crc kubenswrapper[5008]: E0129 15:28:19.324318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328561 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328619 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431105 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533526 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533562 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635572 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635618 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635640 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738823 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738842 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738853 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841499 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841507 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841533 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944729 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944846 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944874 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047170 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149943 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149953 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.299835 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:34:46.492243769 +0000 UTC Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323399 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323481 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323569 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323639 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323740 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323883 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355111 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355138 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355153 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458819 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458938 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.459054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562613 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562708 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665128 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665166 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.766884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767183 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767405 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767505 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869560 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869589 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.886625 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.887074 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.887314 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:28.887284018 +0000 UTC m=+52.560138285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.972925 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.972988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973031 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.075769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.075957 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076035 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076059 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.178919 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.178980 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179000 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281887 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281936 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281966 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281977 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.300015 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:13:50.227555842 +0000 UTC Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.323177 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:21 crc kubenswrapper[5008]: E0129 15:28:21.323422 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384910 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384939 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384952 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487111 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593158 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593203 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593213 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593249 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696460 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696500 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696512 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.800978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801065 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903728 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903821 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006697 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006754 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006776 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110340 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229939 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229969 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229982 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.301139 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:59:17.632877811 +0000 UTC Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323682 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323744 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.323900 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323708 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.324071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.324154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332154 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434354 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.536978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537117 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640146 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640218 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.742907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.742982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743033 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846280 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846411 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949815 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949872 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.052912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053171 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156272 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156351 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156363 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258357 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.302360 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:08:58.260944731 +0000 UTC Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.323285 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:23 crc kubenswrapper[5008]: E0129 15:28:23.323459 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361733 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361756 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465371 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465402 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.568759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.568989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569054 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672342 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775097 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775276 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877881 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877924 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877934 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877950 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877961 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981578 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085610 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188893 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291576 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.303237 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:57.008249696 +0000 UTC Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323731 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323818 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323858 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.323984 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.324115 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.324228 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395182 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498151 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498218 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498236 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498249 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600185 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600314 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600336 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613830 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613881 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613896 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613915 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.632602 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636448 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636504 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636538 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.652376 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676729 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.691527 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696508 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696550 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696584 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.714898 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718346 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718364 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.730237 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.730386 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732142 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732154 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835264 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835313 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835371 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.938987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939060 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041712 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.143898 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.143984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144012 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247277 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247289 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.304149 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:19:25.210163067 +0000 UTC Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.323550 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:25 crc kubenswrapper[5008]: E0129 15:28:25.323730 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352332 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352361 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.455947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559133 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559155 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661954 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661972 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763888 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763914 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868226 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971522 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971600 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971675 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074541 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177997 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.178023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.178043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282207 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.305300 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:22:00.098198958 +0000 UTC Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.322692 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.322866 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.323047 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.323205 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.323354 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385347 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385359 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385380 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385392 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489561 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489585 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592680 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592726 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.695993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696057 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696117 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696140 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800829 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800873 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903640 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903682 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.982530 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.996615 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.009751 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014540 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014614 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014675 5008 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.030311 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.055193 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: 
I0129 15:28:27.071016 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.089774 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.108555 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.117991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118048 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118065 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118090 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118107 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.123334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.159608 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bc
a2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.181017 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.195086 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.210869 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221277 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221297 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.227934 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.246942 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.260644 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.276526 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.293879 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.304920 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.305802 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:38:20.318054668 +0000 UTC Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.322697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:27 crc kubenswrapper[5008]: E0129 15:28:27.322877 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325868 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.339266 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.360545 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.381144 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.394973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.409212 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428197 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428311 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.446642 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fb
f5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.459703 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.472421 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.484502 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.499344 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.512051 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.530960 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531056 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531143 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.544332 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.559485 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.574145 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.587209 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.598123 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.608524 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633472 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633494 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737175 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.839996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840111 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942686 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.045921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.045990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046051 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.124197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.136673 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149998 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.156551 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.174775 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.194432 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.207811 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.221196 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.251298 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252172 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252209 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 
15:28:28.252233 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.265832 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.281604 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.305617 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.305883 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:45:53.306872717 +0000 UTC Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.316508 5008 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323593 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.323731 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323609 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323600 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.323870 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.324009 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.331430 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.354935 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355069 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.358935 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.374920 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.391201 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.406508 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.418747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.431009 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458318 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561327 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561508 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.766992 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869755 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869838 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869890 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.894220 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.894424 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.894531 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:44.894502991 +0000 UTC m=+68.567357258 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080160 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080215 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080275 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183727 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287397 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287430 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.306398 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:23:45.349611014 +0000 UTC Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.322742 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:29 crc kubenswrapper[5008]: E0129 15:28:29.322956 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.323912 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.360392 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.373465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.385692 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390701 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.399718 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.418746 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.432906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.448760 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.463606 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494959 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494972 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.499113 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.509710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.522080 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.533589 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.552885 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.567533 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.581556 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596832 
5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596873 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.598991 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
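
Every "Failed to update status for pod" entry above fails the same way: the pod.network-node-identity.openshift.io webhook serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2026-01-29), so the TLS handshake is rejected before any patch is applied. A minimal sketch of the validity check that produces Go's "certificate has expired or is not yet valid" error; the certificate path is an assumption borrowed from the webhook-cert mount visible in the network-node-identity pod status further down.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path assumed from the webhook-cert volumeMount; adjust for the host.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            fmt.Println("read cert:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse cert:", err)
            return
        }
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            // Mirrors the log: "current time ... is after 2025-08-24T17:21:41Z".
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid before %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }
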
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.614550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.681868 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698913 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801796 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801854 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801870 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801881 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
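
The NodeNotReady events above all carry the same condition message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A sketch of the directory scan that message implies, assuming the file extensions conventionally loaded by libcni; until the network plugin writes a config there, the runtime keeps reporting NetworkReady=false.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the condition message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file; expect NetworkReady=false")
            return
        }
        fmt.Println("CNI configurations:", confs)
    }
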
Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904393 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904457 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904485 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007401 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007479 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109587 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109698 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
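
For bulk analysis of entries like the repeated node-status records above, the klog header (severity letter plus MMDD date, time, PID, source file:line) is regular enough to split mechanically. A sketch, assuming the header format used throughout this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the klog header used by the kubelet entries above:
    // severity, MMDD hh:mm:ss.micros, PID, source file:line, message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\] (.*)$`)

    func main() {
        line := `I0129 15:28:30.007479    5008 setters.go:603] "Node became not ready" node="crc"`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s time=%s pid=%s src=%s\nmsg=%s\n", m[1], m[2], m[3], m[4], m[5])
    }
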
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.208828 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.208979 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.208951102 +0000 UTC m=+85.881805339 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.209052 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.209201 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.209264 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.20924651 +0000 UTC m=+85.882100747 (durationBeforeRetry 32s). 
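
The two nestedpendingoperations entries above defer their retries for 32s ("durationBeforeRetry 32s", next attempt at 15:29:02). That interval is consistent with a per-operation doubling backoff; the initial delay and cap below are assumptions for illustration, not values taken from the log.

    package main

    import (
        "fmt"
        "time"
    )

    // durationBeforeRetry doubles the wait after every consecutive failure and
    // saturates at a cap; initialDelay and maxDelay here are assumed values.
    func durationBeforeRetry(failures int) time.Duration {
        const (
            initialDelay = 500 * time.Millisecond
            maxDelay     = 2*time.Minute + 2*time.Second
        )
        d := initialDelay
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 8; f++ {
            fmt.Printf("failure %d -> wait %v\n", f, durationBeforeRetry(f))
        }
        // Under these assumptions the 7th consecutive failure waits 32s,
        // matching "durationBeforeRetry 32s" in the entries above.
    }
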
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214814 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214894 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.307630 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:38:52.014474638 +0000 UTC Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310229 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310270 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310345 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310346 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 
15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310414 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310432 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310394 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310381301 +0000 UTC m=+85.983235548 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310568 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310536075 +0000 UTC m=+85.983390322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310597 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310671 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310699 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310843 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310776282 +0000 UTC m=+85.983630559 (durationBeforeRetry 32s). 
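
Unlike the expired webhook certificate, the kubelet serving certificate is still healthy: the certificate_manager.go line above shows expiration on 2026-02-24 with a rotation deadline of 2025-12-06, i.e. rotation is scheduled at a randomized point well before expiry. A sketch of that kind of scheduling; the 70-90% range and the NotBefore value are assumptions for illustration.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point late in the validity window;
    // the 70-90% range is an assumption, not taken from the log.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64() // somewhere in [70%, 90%) of the lifetime
        return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
        // NotAfter from the certificate_manager line; NotBefore is hypothetical.
        notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
    }
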
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317717 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317744 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322669 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322695 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322680 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.322830 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.323003 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.323094 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
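
The MountVolume.SetUp failures above are not API-server errors: "object ... not registered" means the kubelet's own configmap/secret cache has no registration for the pod's sources yet, so the projected kube-api-access volume (service-account token plus the kube-root-ca.crt and openshift-service-ca.crt configmaps) cannot be assembled. A sketch that checks the two configmaps from outside the kubelet using client-go; the kubeconfig path is hypothetical.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; any admin kubeconfig for this cluster works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "openshift-network-diagnostics"
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s/%s exists: %v (err=%v)\n", ns, name, err == nil, err)
        }
    }

If the configmaps exist in the API but the kubelet still reports them as not registered, the gap is in the kubelet's pod-object registration, which resolves once pod sync progresses.
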
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420865 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420882 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523141 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523165 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.625932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.625996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626066 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.728918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729014 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832071 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832085 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
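
Each setters.go:603 entry above rewrites the node's Ready condition, and the logged condition={...} payload maps one-to-one onto corev1.NodeCondition. A sketch reproducing the logged shape, with the timestamp hardcoded to the value above and the message abbreviated:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        ts := metav1.NewTime(time.Date(2026, 1, 29, 15, 28, 30, 0, time.UTC))
        cond := corev1.NodeCondition{
            Type:               corev1.NodeReady,
            Status:             corev1.ConditionFalse,
            LastHeartbeatTime:  ts,
            LastTransitionTime: ts,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        b, _ := json.Marshal(cond)
        fmt.Println(string(b)) // same shape as the condition={...} payload in the log
    }
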
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.886288 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.889465 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.889910 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.904235 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.918973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.930561 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934294 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.949933 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.962436 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.976217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.988841 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.000961 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.016710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036893 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036961 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.037002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.037022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.051707 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.071610 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.090099 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.104644 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.117750 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139588 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.142419 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.157459 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.170584 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.186262 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242413 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242448 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242475 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242484 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.308543 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:08:56.951149413 +0000 UTC
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.323109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:31 crc kubenswrapper[5008]: E0129 15:28:31.323456 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.345025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.345043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.447963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448063 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448091 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448112 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.550932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.550990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551032 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654549 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654603 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758154 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758236 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758273 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758324 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.861912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.861978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862046 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.896184 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.897600 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.901920 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" exitCode=1
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.901962 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"}
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.902004 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.904057 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"
Jan 29 15:28:31 crc kubenswrapper[5008]: E0129 15:28:31.904385 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280"
Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.926897 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.960370 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965486 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965644 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965670 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.983234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.007888 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.028039 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.051697 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068715 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068821 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068846 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068889 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.073975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.090654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.112907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.133648 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.156503 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172133 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172151 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.177184 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.194714 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.218495 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.238275 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.252858 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 
15:28:32.271912 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.274969 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 
15:28:32.275046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275185 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.286220 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.309601 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:23:04.397169389 +0000 UTC Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323129 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323125 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323152 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323389 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323515 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377802 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377839 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377853 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.480012 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.480032 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685401 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685425 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788351 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788406 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.890986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891131 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.914769 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.920113 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.920543 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.939038 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.974209 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993816 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993894 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993968 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.996751 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.018848 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.038283 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.059322 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.080023 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.096268 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.099023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.099062 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.116966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.136031 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.154350 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.172470 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202243 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202276 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.203264 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.235289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1b
f41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.257083 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.275965 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.294237 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305123 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305168 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc 
kubenswrapper[5008]: I0129 15:28:33.305182 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305215 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.306710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 
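
[editor's note] The patch failures above are Go's TLS client rejecting the webhook's serving certificate: the node clock (2026-01-29T15:28:33Z) is past the certificate's NotAfter (2025-08-24T17:21:41Z). Below is a minimal stdlib sketch of the same time-bounds check that x509 verification applies during the handshake. The certificate path is an assumption taken from the webhook container's volumeMounts logged above (/etc/webhook-cert/), not a path confirmed by this log.

// Sketch: the time-bounds check behind "x509: certificate has expired or
// is not yet valid". Stdlib only; the path is assumed, see note above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if now := time.Now(); now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// With now = 2026-01-29T15:28:33Z and NotAfter = 2025-08-24T17:21:41Z,
		// this is exactly the condition every POST above trips over.
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Any client performing standard x509 verification against this endpoint fails the same way until the certificate is rotated or the clock is corrected; the kubelet therefore cannot patch pod or node status through the admission webhook. [end note]
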
29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.310725 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:42:14.424102166 +0000 UTC Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:33 crc kubenswrapper[5008]: E0129 15:28:33.323567 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.410698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411118 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411142 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411157 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.515532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.515567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.515575 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.515590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.515599 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.618370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.618449 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.618463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.618480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.618492 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.721199 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.721302 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.721371 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.721402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.721424 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.823596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.823699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.823718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.823748 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.823767 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.926258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.926339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.926358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.926382 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.926401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.029878 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.030304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.030484 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.030676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.030850 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.135029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.135078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.135096 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.135119 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.135136 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.238843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.238913 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.238937 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.238966 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.238988 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.310885 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:32:41.172364017 +0000 UTC Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.323517 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.323726 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.323969 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.324063 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.324125 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.324267 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
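
[editor's note] The two certificate_manager lines above report different rotation deadlines (2025-11-27 22:42:14 at 15:28:33, then 2025-11-24 16:32:41 at 15:28:34) for the same kubelet-serving certificate expiring 2026-02-24 05:53:03. That is expected: the deadline is recomputed with jitter on every sync pass. A sketch of the scheme follows, under the assumption that the deadline is drawn uniformly from roughly the 70-90% window of the certificate's validity; the factors are modeled on k8s.io/client-go/util/certificate from memory, not quoted from it.

// Sketch: jittered rotation deadline, recomputed per pass, which is why
// consecutive log lines show different deadlines for one certificate.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a fresh point inside the validity window.
// The 0.7 + 0.2*rand factors are an assumption, see note above.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	return notBefore.Add(time.Duration(total * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year validity
	for i := 0; i < 2; i++ {
		fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter).UTC())
	}
}

With an assumed one-year validity, the 70-90% window spans roughly early November 2025 to mid-January 2026, consistent with both logged deadlines. [end note]
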
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.341612 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.341655 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.341666 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.341695 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.341720 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.447556 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.447800 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.447810 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.447828 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.447851 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.551855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.551927 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.551944 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.551969 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.551986 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.654545 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.654588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.654600 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.654618 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.654632 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.758137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.758190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.758206 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.758230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.758248 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.861371 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.861427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.861443 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.861466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.861483 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.964425 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.964508 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.964533 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.964562 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.964582 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:34Z","lastTransitionTime":"2026-01-29T15:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.022979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023130 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.045056 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
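
[editor's note] The status payloads in these entries are JSON embedded as a quoted string inside the structured logger's own quoting, which is why every quote surfaces as \\\". For reading them, collapsing that escaping and re-indenting is enough. A small stdlib helper follows; the sample is a shortened fragment of the node-status patch logged above, and the single-pass replacement is only claimed to work for payloads like these, which contain no further nested escaping.

// Sketch: recover readable JSON from the escaped patch payloads in this
// log. Not a general unquoter; sufficient for these entries.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Shortened fragment of the node status patch logged above.
	fragment := `{\\\"status\\\":{\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"memory\\\":\\\"32865360Ki\\\"}}}`
	clean := strings.ReplaceAll(fragment, `\\\"`, `"`)
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(clean), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}

Decoded this way, the payload is an ordinary strategic-merge patch: the $setElementOrder/conditions directive pins the ordering of the conditions list while the conditions entries carry the updated values. [end note]
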
event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.051005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.051022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.072007 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076883 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
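The failure above is worth unpacking: the status patch never reaches the Node object because the API server must first consult the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743/node, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2026-01-29T15:28:35Z, so every attempt fails TLS verification. A minimal, self-contained Go sketch of the same validity-window check (the certificate path is a placeholder, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the webhook's serving certificate; adjust as needed.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // first PEM block should be the leaf certificate
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// Mirrors the check behind "x509: certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: now %s, valid %s - %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore, cert.NotAfter)
	} else {
		fmt.Println("certificate is within its validity window")
	}
}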
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.077018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.077042 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.098086 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104114 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.120050 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124686 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.141359 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.141525 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
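The "exceeds retry count" entry above closes one status-update cycle: the kubelet patches node status a small fixed number of times per sync (upstream kubelet sources use a nodeStatusUpdateRetry constant of 5, which matches the run of attempts seen here; treat the exact value as an assumption), then gives up until the next status-update interval. A sketch of that bounded-retry shape, with hypothetical names:

package main

import "fmt"

// nodeStatusUpdateRetry mirrors the small fixed retry budget visible in this
// log (5 in upstream kubelet; an assumption for this sketch).
const nodeStatusUpdateRetry = 5

// updateNodeStatus shows the shape: each failed patch corresponds to
// "Error updating node status, will retry"; exhausting the budget yields
// "update node status exceeds retry count".
func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	err := updateNodeStatus(func() error {
		// Stand-in for the webhook's expired-certificate failure above.
		return fmt.Errorf("x509: certificate has expired or is not yet valid")
	})
	fmt.Println(err)
}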
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143725 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.246985 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247118 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247139 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.311630 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:21:28.490023277 +0000 UTC
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.323154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.323359 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350213 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350246 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350279 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453652 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453697 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
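The certificate_manager.go line above reports the kubelet-serving certificate's expiry (2026-02-24 05:53:03 UTC) together with a rotation deadline; the deadline is recomputed with random jitter on each pass, which is why successive certificate_manager.go entries in this log report different deadlines for the same expiry. A Go sketch of that jittered-deadline idea; the 70-90% fraction is an assumption modeled on client-go's certificate manager, and the issue date below is hypothetical:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point 70-90% of the way through the
// certificate's validity window, so each recomputation lands somewhere else.
// (The exact fraction is an assumption, not read from this log.)
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.Add(-30 * 24 * time.Hour) // hypothetical issue date
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}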
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557146 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660524 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660616 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660630 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763665 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866329 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969217 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969348 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969369 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072578 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072620 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
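Every "Node became not ready" condition in this stretch carries the same root cause: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet; on OpenShift that file is written once the network provider's pods come up, which is presumably blocked here by the same expired certificate. A small Go sketch of the essence of that readiness check, scanning the conf directory named in the log (the scanning logic is illustrative, not the kubelet's exact code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file; the CRI keeps reporting NetworkReady=false until the
// network provider drops one in.
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("CNI config present:", hasCNIConfig("/etc/kubernetes/cni/net.d"))
}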
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175853 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175944 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175960 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278736 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.311944 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:48:25.0020584 +0000 UTC Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323607 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323686 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323623 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.323825 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.323934 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.324034 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.381989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.485999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.589013 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.589022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691883 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691919 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794712 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794806 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794819 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897911 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.898019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.898036 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.000894 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001037 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001142 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103505 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103642 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207396 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207420 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207437 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310583 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.312978 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:56:41.055200248 +0000 UTC Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.323422 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:37 crc kubenswrapper[5008]: E0129 15:28:37.323894 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.360345 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.374001 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.387979 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.406119 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415322 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.421356 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.435654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.451661 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.469661 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.487950 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.504198 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.517941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.517977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc 
kubenswrapper[5008]: I0129 15:28:37.517988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518004 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518015 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518677 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.532666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.546946 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.575902 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f396183
7d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.594800 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.616037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620208 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620279 
5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620325 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.630382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.643918 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z"
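Every "Failed to update status for pod" record above fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. A minimal sketch for confirming that from the node itself, assuming only the address taken from the log; the code is illustrative and not part of any OpenShift tooling:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Dial the webhook endpoint and inspect (without trusting) its serving
	// certificate; verification is exactly the step the kubelet's patch fails on.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.CommonName, cert.NotBefore, cert.NotAfter)
	}
}

A notAfter of 2025-08-24T17:21:41Z would confirm that the serving certificate simply predates the current clock, consistent with a cluster image built months before this boot.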
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.723963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827970 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931628 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931689 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035193 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035327 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035353 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138458 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138500 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
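Interleaved with the patch failures, the kubelet re-records the node as NotReady several times per second because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet (ovn-kubernetes has not written one). A minimal sketch of the equivalent check, assuming the directory from the log; the extension list mirrors what upstream libcni conventionally accepts and is an assumption here:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir+":", err)
		return
	}
	found := false
	for _, e := range entries {
		// libcni loads .conf, .conflist and .json files from the conf dir.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir)
	}
}

Once the network plugin writes a config into that directory, the NetworkReady condition flips and these records stop.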
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241473 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.313892 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:24:27.291243988 +0000 UTC
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323340 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323392 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323513 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323510 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
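The certificate_manager record above is worth reading against the webhook failures: the kubelet-serving certificate does not expire until 2026-02-24, but its rotation deadline (2026-01-02) already lies in the past at this boot, so rotation is due immediately. client-go schedules that deadline at a jittered point roughly 70-90% of the way through the certificate's validity window; a minimal sketch under that assumption (the notBefore below is hypothetical, since only the expiration appears in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant 70-90% of the way through the
// certificate's validity, mirroring the jitter client-go's manager applies.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // hypothetical issuance time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log line
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline passed: rotate now")
	}
}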
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344587 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.447991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448155 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448168 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551306 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551434 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551488 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.655256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.655980 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656022 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656052 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760479 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862752 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862764 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965880 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965895 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068622 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068637 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068672 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172177 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274539 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274612 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274642 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.314578 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:34:25.98226525 +0000 UTC Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.323304 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:39 crc kubenswrapper[5008]: E0129 15:28:39.323461 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377144 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480326 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480344 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582700 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685495 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685530 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788739 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788802 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891766 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891849 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891871 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.993981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994075 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994121 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097583 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097675 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097718 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201424 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201444 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201492 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304437 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304499 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304516 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304539 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304556 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.315161 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:21:33.219017574 +0000 UTC Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322696 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322825 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322852 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.322958 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.323276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.323276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408116 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408152 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511239 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511350 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511362 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614540 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614556 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717072 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820530 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820586 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820616 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.922900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.922974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025628 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025636 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025661 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.133339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134149 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237175 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237249 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237267 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.315557 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:45:36.612504547 +0000 UTC Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.323343 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:41 crc kubenswrapper[5008]: E0129 15:28:41.323907 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340511 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340624 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443692 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.546995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650357 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650387 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753943 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753976 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856840 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960129 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063398 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063430 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166640 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166768 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270377 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270457 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.316729 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:02:46.197367575 +0000 UTC Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323129 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323229 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323294 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323440 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323556 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373725 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373738 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476772 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476810 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476884 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.579981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580060 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682472 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682483 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785279 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785380 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888680 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888710 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888725 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991057 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991121 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991159 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094526 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198147 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198187 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198202 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300875 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.317384 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:38:02.658613977 +0000 UTC
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.322676 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:43 crc kubenswrapper[5008]: E0129 15:28:43.323076 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.324647 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"
Jan 29 15:28:43 crc kubenswrapper[5008]: E0129 15:28:43.325007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.342017 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.404170 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.404210 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.404219 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.404233 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.404243 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.507691 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.508162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.508332 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.508474 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.508625 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.611693 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.611742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.611754 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.611769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.611799 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.714808 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.714851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.715149 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.715168 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.715180 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.818336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.818410 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.818434 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.818465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.818486 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.921527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.921575 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.921588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.921608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.921620 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.024291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.024340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.024358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.024383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.024403 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.127432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.127513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.127537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.127569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.127592 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.230292 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.230340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.230353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.230370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.230383 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.318455 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:14:01.955995801 +0000 UTC
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.322758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.322891 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.322988 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.323052 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.323107 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.332481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.332506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.332517 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.332530 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.332541 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.435025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.435068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.435086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.435107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.435125 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.538540 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.538576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.538586 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.538599 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.538609 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.641316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.641351 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.641361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.641373 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.641382 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.743974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.744012 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.744021 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.744035 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.744045 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.846510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.846555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.846568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.846586 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.846597 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.949011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.949054 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.949064 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.949079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.949088 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.975525 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.975638 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.975699 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:16.975683055 +0000 UTC m=+100.648537292 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.052289 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.052329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.052341 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.052358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.052369 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.154942 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.154984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.154996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.155017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.155035 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.257210 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.257273 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.257286 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.257315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.257328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.319338 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:20:34.713155412 +0000 UTC
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.322836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.323101 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.359302 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.359343 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.359352 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.359367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.359376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.461949 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.461998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.462009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.462024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.462035 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.466897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.466951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.466970 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.466993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.467008 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.485432 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490164 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490225 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.507073 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511498 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511582 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.527226 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z"
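The retry above fails in the TLS handshake, before the patch body is ever evaluated: the webhook's serving certificate expired on 2025-08-24, roughly five months before the log's clock time of 2026-01-29. A minimal Go sketch of the same validity-window check; the certificate path is a hypothetical placeholder, not something taken from this log:

// Reproduce the x509 window check behind "certificate has expired or is
// not yet valid". Point the path at the webhook's serving certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/webhook/serving-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// The same comparison the TLS verifier makes when it emits
	// "current time ... is after ...".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Run against the webhook's actual serving certificate, this would report the same 2025-08-24T17:21:41Z boundary that appears in the error.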
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537615 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537641 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.554129 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z"
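The err string carries the entire node-status patch as a Go-quoted JSON document, which is why every quote in the log appears as \\\". A small sketch for stripping one escaping layer and pretty-printing the payload; the short fragment below is a stand-in, and in practice you would paste the full quoted \"{...}\" span from the log line:

// Unescape a kubelet log's embedded patch body so it can be read or
// piped to other tooling.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// Stand-in for the escaped payload copied out of the log line.
	escaped := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
	unquoted, err := strconv.Unquote(escaped) // removes one escaping layer
	if err != nil {
		log.Fatal(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}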
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.557971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558071 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558087 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.570934 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z"
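Every retry carries the same four conditions. Only Ready is False for a real failure; the three pressure conditions are False because the node is healthy on those axes. A sketch with plain structs that mirror the v1.NodeCondition wire format seen in the patch (the structs are local stand-ins, not the k8s.io/api types):

// Model the four node conditions the kubelet is trying to patch.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	conds := []NodeCondition{
		{"MemoryPressure", "False", now, now, "KubeletHasSufficientMemory", "kubelet has sufficient memory available"},
		{"DiskPressure", "False", now, now, "KubeletHasNoDiskPressure", "kubelet has no disk pressure"},
		{"PIDPressure", "False", now, now, "KubeletHasSufficientPID", "kubelet has sufficient PID available"},
		{"Ready", "False", now, now, "KubeletNotReady", "container runtime network not ready"},
	}
	out, _ := json.MarshalIndent(conds, "", "  ")
	fmt.Println(string(out))
}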
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.571087 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572338 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572365 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572404 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.674743 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.674891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
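The .571087 entry marks the end of a bounded retry loop: the kubelet attempts the status patch a fixed number of times (the nodeStatusUpdateRetry constant, 5 in upstream sources) and only then logs "Unable to update node status" until the next sync period. An illustrative sketch of that pattern under those assumptions, not the kubelet's actual code:

// Bounded retry behind "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to match the upstream kubelet constant

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the real patch call; here every attempt fails the
	// same way the webhook call fails in the log above.
	return errors.New("failed calling webhook: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}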
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.776947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879404 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879446 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981159 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981305 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083233 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083349 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.185905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.185988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288492 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288541 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288552 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
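Every "Node became not ready" condition above has the same root cause: the container runtime finds no network configuration under /etc/kubernetes/cni/net.d/. A sketch of that presence check, assuming the libcni convention that .conf, .conflist, and .json files count as network configs:

// Check whether any CNI network configuration exists, mirroring the
// condition that drives NetworkReady=false in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady:",
			"no CNI configuration file in /etc/kubernetes/cni/net.d/")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once the network provider (here, presumably OVN-Kubernetes) writes its config into that directory, the Ready condition flips and the pod sandboxes below can be created.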
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.320484 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:29:14.437186072 +0000 UTC
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323032 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323087 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323050 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323249 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323317 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323434 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392460 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392478 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392489 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
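The certificate_manager entry shows a rotation deadline (2025-12-13) that has already passed relative to the clock (2026-01-29), so the kubelet should be attempting to rotate its serving certificate. client-go's certificate manager picks the deadline as a jittered point most of the way through the certificate's validity; the 70-90% jitter below is an illustrative assumption rather than the exact upstream formula, and the notBefore date is hypothetical since the log only reports the expiration:

// Derive a rotation deadline like the one logged by certificate_manager.go.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // assumed jitter in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	notBefore := time.Date(2025, time.November, 24, 5, 53, 3, 0, time.UTC) // hypothetical issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
}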
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495137 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598198 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598217 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598231 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701436 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701447 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701478 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803364 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906738 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906762 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906778 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009117 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009148 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112373 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112391 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112402 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214668 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317144 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.321259 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:12:12.405440458 +0000 UTC Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.323642 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:47 crc kubenswrapper[5008]: E0129 15:28:47.323800 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.336696 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.347720 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.358550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.372712 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.382659 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.396300 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.410819 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419724 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419860 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419882 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.428160 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.450135 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.462855 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.479055 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.494466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.506878 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522816 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522853 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522863 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522886 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.525905 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.537712 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.551252 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.565728 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.577157 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.590528 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.625005 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726648 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.828986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829082 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932620 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973007 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973060 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b" exitCode=1
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973090 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"}
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973464 5008 scope.go:117] "RemoveContainer" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"
Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.988070 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.010681 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.023905 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036403 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036412 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036436 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.041395 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.065803 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.079557 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.097620 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.112046 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.128353 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.139935 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.139993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140067 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.143538 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.154925 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.165513 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.175320 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.187794 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.198335 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.213963 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.232039 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242513 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.246896 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.258933 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.321680 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:03:08.453426423 +0000 UTC Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323134 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.323473 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323142 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.323591 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323730 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.324077 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345237 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345273 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345312 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447591 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549193 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549204 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651656 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651666 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754202 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754226 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754235 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856226 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856234 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856248 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856257 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.958918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.958982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959001 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.978590 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.978648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.993563 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.015385 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.027438 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.039143 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.103930 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.120163 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.135155 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.147515 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.159399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164245 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164331 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.171258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.180370 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.189964 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.202207 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.216542 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.227776 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.240802 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.263683 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266356 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.281494 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.296066 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.322859 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:09:12.800761068 +0000 UTC Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.323097 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:49 crc kubenswrapper[5008]: E0129 15:28:49.323309 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369066 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369089 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471747 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575358 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678247 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781325 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781338 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781363 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884120 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985699 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088221 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190339 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293541 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293578 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323401 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323473 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.323559 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.323693 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323897 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:06:41.472838428 +0000 UTC Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.324041 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397491 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397518 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397529 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500123 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500150 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603003 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603050 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603101 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603115 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705641 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705714 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809149 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809189 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809198 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809224 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912553 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912716 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015476 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015487 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015500 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015509 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118227 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221484 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221610 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.322972 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:51 crc kubenswrapper[5008]: E0129 15:28:51.323249 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324149 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:06:21.485790109 +0000 UTC Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324962 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427403 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427411 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529610 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529723 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632626 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632742 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750444 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854566 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854592 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956741 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956823 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956862 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061075 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163306 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163395 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163407 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266157 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266276 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266320 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323387 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323489 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323511 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323678 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323845 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323901 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.324283 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:29:21.671592971 +0000 UTC Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368859 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368952 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471294 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471385 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574920 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574966 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677864 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677952 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677996 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781741 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781816 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781876 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884387 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884404 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884416 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987659 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.089955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090078 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.193990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297286 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297381 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297400 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.323942 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:53 crc kubenswrapper[5008]: E0129 15:28:53.324134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.324445 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:06:36.783606506 +0000 UTC Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401520 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401587 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.504003 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606760 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606770 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606829 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606840 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.709931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710061 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813476 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813506 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916497 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916535 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916547 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.019959 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020093 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123121 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123145 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123173 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123199 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226497 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226534 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323045 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323081 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323160 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323051 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323258 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323404 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.325108 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:12:21.08656571 +0000 UTC Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329374 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432671 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535694 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535724 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535737 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637750 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637831 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637874 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740335 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740569 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.844008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.844110 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.946927 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.946991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947051 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050442 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050486 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.153515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.153916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154227 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154417 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154738 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.323688 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.323886 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.325899 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:24:14.832125552 +0000 UTC Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.360905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361088 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361109 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464168 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464197 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464220 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566847 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566928 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566937 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669730 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669803 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669818 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669829 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772233 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772289 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772298 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875242 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875343 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948839 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.961888 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965064 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965090 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965113 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965122 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.978670 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
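The x509 failure above is a pure validity-window check: the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the client clock reads 2026-01-29, so verification fails before any HTTP exchange happens. A minimal Go sketch of the same comparison follows (crypto/x509 applies this window test inside Certificate.Verify; the standalone program and its PEM-path argument are illustrative, not kubelet code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path to a PEM-encoded certificate passed as the first argument
	// (illustrative; not how the kubelet or webhook loads its cert).
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same window comparison that yields "certificate has expired
	// or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Run against the webhook's serving certificate, this reproduces exactly the "current time ... is after ..." clause in the log.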
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982491 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982507 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982518 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.993204 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996807 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996874 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
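Because verification aborts the handshake from the client side, one way to confirm what the endpoint at https://127.0.0.1:9743 actually serves is to complete the handshake without verification and inspect the peer certificate's dates. A hedged diagnostic sketch, assuming only the address taken from the failing Post URL in the log:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Address taken from the failing webhook URL in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification so the handshake succeeds even with an
		// expired certificate; this program only inspects dates.
		InsecureSkipVerify: true,
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.String(), cert.NotBefore, cert.NotAfter)
	}
}

If the served leaf shows notAfter=2025-08-24T17:21:41Z, the fix is rotating the network-node-identity webhook's serving certificate, not anything on the kubelet side.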
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996915 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996932 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.010175 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
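The recurring KubeletNotReady condition names its own cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The readiness gate amounts to "does the conf directory contain at least one network config". A rough reproduction of that check, assuming the directory from the log and the conventional .conf/.conflist/.json extensions (the real check lives in the CRI runtime's CNI handling, not in this sketch):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory reported by the kubelet; the extension list matches
	// what CNI config loaders commonly accept (an assumption here).
	dir := "/etc/kubernetes/cni/net.d"
	var matches []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			panic(err)
		}
		matches = append(matches, m...)
	}
	if len(matches) == 0 {
		fmt.Fprintf(os.Stderr, "NetworkReady=false: no CNI configuration file in %s\n", dir)
		os.Exit(1)
	}
	fmt.Println("found CNI config:", matches)
}

In this log the directory stays empty because the network operator's pods cannot start while the node is NotReady, which is itself a consequence of the expired webhook certificate blocking status updates.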
event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014365 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.030373 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.030499 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
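"update node status exceeds retry count" marks the end of a bounded burst of immediate attempts: the entries above show five consecutive patch failures before kubelet_node_status.go:572 gives up until the next heartbeat interval. A schematic of that pattern (the constant name and value mirror upstream kubelet's nodeStatusUpdateRetry = 5, which should be treated as an assumption here; the failing call is a stand-in, not the real client-go PATCH):

package main

import (
	"errors"
	"fmt"
)

// Assumption: mirrors upstream kubelet's nodeStatusUpdateRetry constant.
const nodeStatusUpdateRetry = 5

// Stand-in for the PATCH that the expired webhook certificate rejects.
func tryPatchNodeStatus() error {
	return errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryPatchNodeStatus()
		if err == nil {
			return nil
		}
		fmt.Printf("Error updating node status, will retry: %v\n", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		// Corresponds to the terminal log line at kubelet_node_status.go:572.
		fmt.Println("Unable to update node status:", err)
	}
}

The loop then restarts on the next sync period, which is why the same five-failure burst repeats throughout this log.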
event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031764 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031805 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031815 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135240 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.237955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238035 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.322914 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.322989 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323065 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323187 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.323347 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.324178 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.326145 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:20:07.587342446 +0000 UTC
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343407 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343444 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454262 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454358 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556929 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659629 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659648 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659660 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762122 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762133 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864145 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864208 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966495 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966567 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.007406 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.009991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.010373 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.032445 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab
982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.044288 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.055433 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068202 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068238 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068249 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068265 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068424 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.081249 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.090805 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.100634 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.115515 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.130334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.142862 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.156534 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.167550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170327 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.183376 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.193217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.211314 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.225295 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.238105 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.248716 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.258804 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272646 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272692 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272714 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.322711 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:57 crc kubenswrapper[5008]: E0129 15:28:57.322943 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.327105 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:31:41.968963403 +0000 UTC Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.343233 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b5
55efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.355667 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.364613 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.374932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375183 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375435 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375543 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.378830 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.393182 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.406585 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.420938 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.439336 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.453253 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.473145 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477745 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477769 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.484747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.496286 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.504240 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.516351 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.532843 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab
982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.545460 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.556990 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.570848 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580776 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580913 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.586474 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682566 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682595 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.887538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.887888 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888004 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888245 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991347 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991398 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991413 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991423 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.014688 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.015990 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020686 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" exitCode=1 Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020727 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020763 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.022113 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.022500 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.035000 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.052289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.063499 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.073682 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.093424 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094110 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094148 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094172 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.104966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.116812 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.127720 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.139354 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.150490 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.159585 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.170475 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
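The multus-42hcz restart recorded above (exitCode 1 at 15:28:47) came from the readiness-indicator check: the daemon polled for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf from 15:28:02 and gave up with "pollimmediate error: timed out waiting for the condition". Below is the shape of that check, sketched with only the standard library; multus itself uses apimachinery's PollImmediate, and the roughly 45-second budget is inferred from the log timestamps, not a documented default.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile checks for path at every interval until it appears or the
// timeout lapses, the same pattern as the PollImmediate call the multus
// error message names.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Readiness indicator path taken from the log entry above.
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Printf("have you checked that your default network is ready? %v\n", err)
		os.Exit(1) // matches the container's exitCode 1 above
	}
	fmt.Println("default network ready")
}

The file never appeared because ovn-kubernetes, itself unable to reach the API through the broken webhook path, never wrote its CNI config.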
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.183234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196535 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196685 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196695 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196726 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.207565 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.219382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
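Both "Node became not ready" transitions report the same KubeletNotReady reason: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The gate is simply the presence of a loadable config in that directory. A sketch of the equivalent check follows; it mirrors the condition the runtime reports, not the kubelet's actual code path.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the KubeletNotReady message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// Extensions CNI config loaders accept.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("found CNI config: %s\n", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file: the runtime keeps reporting NetworkReady=false")
	}
}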
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.236392 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:57Z\\\",\\\"message\\\":\\\"rk=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:28:57.018530 7109 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
cert\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.248563 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.261658 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 
15:28:58.299518 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299605 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323355 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323415 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323451 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323564 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323719 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.328362 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:21:43.056989325 +0000 UTC Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402747 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505412 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608241 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608267 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711148 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711174 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814753 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917657 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020022 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020127 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020143 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.024763 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.027621 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:28:59 crc kubenswrapper[5008]: E0129 15:28:59.027798 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.041238 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.054118 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.093541 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:57Z\\\",\\\"message\\\":\\\"rk=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:28:57.018530 7109 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
cert\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.107289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.120928 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123289 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123357 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.134380 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.153704 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.174416 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.188762 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.203987 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.220840 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225583 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225596 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.231453 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.243302 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.253958 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.265132 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to 
/host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.276236 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.298386 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.313318 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.322907 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:59 crc kubenswrapper[5008]: E0129 15:28:59.323038 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327394 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327434 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.329222 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:30:35.36247741 +0000 UTC Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.333120 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429578 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429590 5008 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533135 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533154 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533198 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637758 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637816 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741202 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844858 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844866 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844880 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844889 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947629 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057595 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160352 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267612 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323069 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323198 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323308 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323339 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323405 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323501 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.330269 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:27:35.831757744 +0000 UTC Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370519 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473813 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473908 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473926 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576644 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576675 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679620 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679710 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782602 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884676 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987235 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987319 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987346 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987365 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090437 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193560 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193707 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296606 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.323390 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:01 crc kubenswrapper[5008]: E0129 15:29:01.323554 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.331115 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:42:53.850976977 +0000 UTC Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399405 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399417 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399451 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502382 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502393 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502420 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605157 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605201 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707710 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809917 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809946 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912997 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.913014 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015949 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015962 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015995 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118675 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118731 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118754 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.220986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221037 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221050 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221059 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.303095 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.303279 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303432 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303432 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.303391664 +0000 UTC m=+149.976245961 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:06.303488307 +0000 UTC m=+149.976342554 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322654 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322699 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322654 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.322867 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.322952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.323089 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323817 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323849 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323857 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323870 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323882 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.332164 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:09:44.943932023 +0000 UTC Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405280 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405398 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405597 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405645 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405741 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-01-29 15:30:06.405715221 +0000 UTC m=+150.078569488 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405652 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405768 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405852 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405882 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405850 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405958 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.405934739 +0000 UTC m=+150.078789016 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.406044 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.406016031 +0000 UTC m=+150.078870348 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427572 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427594 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531142 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633645 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633691 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633716 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742652 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742757 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742833 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742899 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.846963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847063 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847113 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950215 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950274 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950303 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053859 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053901 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053920 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.156627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158085 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261239 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261264 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261296 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261320 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.323160 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:03 crc kubenswrapper[5008]: E0129 15:29:03.323332 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.333304 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:11:19.050432148 +0000 UTC Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364511 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466562 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466678 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.672976 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673014 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776209 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776297 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879434 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879544 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982707 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086949 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.087010 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190222 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190248 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292923 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.293015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.293032 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.322750 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.322945 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.323171 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323531 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323625 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.334088 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:25:48.378985485 +0000 UTC Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395958 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395999 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499220 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499235 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499246 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603296 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706446 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706456 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808337 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808364 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911237 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911254 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911279 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911296 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014372 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014425 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014442 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014457 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.116908 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.116996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117036 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219520 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219530 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219552 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.322821 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:05 crc kubenswrapper[5008]: E0129 15:29:05.322945 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328848 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328962 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.329016 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.329046 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.334370 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:11:31.369764543 +0000 UTC Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432546 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432654 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535848 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535893 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535910 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844450 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947164 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947304 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.050611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.050876 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051122 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051162 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153857 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.256974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257064 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257081 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323014 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323089 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323192 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323302 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323454 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323732 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.335181 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:48:34.334377245 +0000 UTC Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337970 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.338030 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.338053 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381749 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381808 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.424530 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8"] Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.425043 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.426991 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427487 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427631 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427813 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.485301 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.485277721 podStartE2EDuration="23.485277721s" podCreationTimestamp="2026-01-29 15:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.484701603 +0000 UTC m=+90.157555840" watchObservedRunningTime="2026-01-29 15:29:06.485277721 +0000 UTC m=+90.158131968" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.501759 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-78bl2" podStartSLOduration=68.501742532 podStartE2EDuration="1m8.501742532s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.501654149 +0000 UTC m=+90.174508436" watchObservedRunningTime="2026-01-29 15:29:06.501742532 +0000 UTC m=+90.174596789" Jan 29 
15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.524153 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qj8wb" podStartSLOduration=68.524131007 podStartE2EDuration="1m8.524131007s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.513113285 +0000 UTC m=+90.185967532" watchObservedRunningTime="2026-01-29 15:29:06.524131007 +0000 UTC m=+90.196985244" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.550935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551138 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551177 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551217 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.568052 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-42hcz" podStartSLOduration=68.56802962 podStartE2EDuration="1m8.56802962s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.557812253 +0000 UTC m=+90.230666520" watchObservedRunningTime="2026-01-29 15:29:06.56802962 +0000 UTC m=+90.240883867" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.568279 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" podStartSLOduration=67.568273737 podStartE2EDuration="1m7.568273737s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.567310008 +0000 UTC m=+90.240164245" watchObservedRunningTime="2026-01-29 15:29:06.568273737 +0000 UTC m=+90.241127994" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.593081 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.593061667 podStartE2EDuration="1m7.593061667s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.592621354 +0000 UTC m=+90.265475591" watchObservedRunningTime="2026-01-29 15:29:06.593061667 +0000 UTC m=+90.265915904" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.631385 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.631370606 podStartE2EDuration="1m8.631370606s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.617691642 +0000 UTC m=+90.290545879" watchObservedRunningTime="2026-01-29 15:29:06.631370606 +0000 UTC m=+90.304224843" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.631492 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.631488721 podStartE2EDuration="1m8.631488721s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.630867901 +0000 UTC m=+90.303722158" watchObservedRunningTime="2026-01-29 15:29:06.631488721 +0000 UTC m=+90.304342958" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.641060 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wtvvb" podStartSLOduration=68.641038357 podStartE2EDuration="1m8.641038357s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.640565452 +0000 UTC m=+90.313419689" watchObservedRunningTime="2026-01-29 15:29:06.641038357 +0000 UTC m=+90.313892604" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651660 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651730 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651713 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podStartSLOduration=68.651698478 podStartE2EDuration="1m8.651698478s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.651413279 +0000 UTC m=+90.324267526" watchObservedRunningTime="2026-01-29 15:29:06.651698478 +0000 UTC m=+90.324552715" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651809 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651806 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651817 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.652622 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.666647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: 
\"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.669630 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.696376 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.696361155 podStartE2EDuration="40.696361155s" podCreationTimestamp="2026-01-29 15:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.696011724 +0000 UTC m=+90.368865981" watchObservedRunningTime="2026-01-29 15:29:06.696361155 +0000 UTC m=+90.369215392" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.744392 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: W0129 15:29:06.770053 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f49fada_aec3_467e_93ac_1a06f27ea564.slice/crio-ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf WatchSource:0}: Error finding container ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf: Status 404 returned error can't find the container with id ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.057371 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" event={"ID":"8f49fada-aec3-467e-93ac-1a06f27ea564","Type":"ContainerStarted","Data":"654afe7caf138947547858d117e989e4374d20d9d127e21145889b20e89cb559"} Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.057448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" event={"ID":"8f49fada-aec3-467e-93ac-1a06f27ea564","Type":"ContainerStarted","Data":"ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf"} Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.074296 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" podStartSLOduration=69.074061182 podStartE2EDuration="1m9.074061182s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:07.073864216 +0000 UTC m=+90.746718493" watchObservedRunningTime="2026-01-29 15:29:07.074061182 +0000 UTC m=+90.746915429" Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.323256 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:07 crc kubenswrapper[5008]: E0129 15:29:07.324553 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.335551 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:48:34.051723849 +0000 UTC Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.335672 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.350381 5008 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323495 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323605 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323644 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323725 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323887 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323960 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:09 crc kubenswrapper[5008]: I0129 15:29:09.323363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:09 crc kubenswrapper[5008]: E0129 15:29:09.323628 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323608 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323714 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.323763 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.323873 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323940 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.324001 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:11 crc kubenswrapper[5008]: I0129 15:29:11.322878 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:11 crc kubenswrapper[5008]: E0129 15:29:11.323387 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322662 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322852 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.322932 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.323054 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.323134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:13 crc kubenswrapper[5008]: I0129 15:29:13.323860 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:13 crc kubenswrapper[5008]: E0129 15:29:13.324725 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:13 crc kubenswrapper[5008]: I0129 15:29:13.325675 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:29:13 crc kubenswrapper[5008]: E0129 15:29:13.325928 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323244 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323255 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.323449 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323275 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.323610 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.324030 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:15 crc kubenswrapper[5008]: I0129 15:29:15.323523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:15 crc kubenswrapper[5008]: E0129 15:29:15.323661 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323431 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323471 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323580 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323649 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323729 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323796 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.990346 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.990752 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.990943 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.990917257 +0000 UTC m=+164.663771494 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:17 crc kubenswrapper[5008]: I0129 15:29:17.325033 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:17 crc kubenswrapper[5008]: E0129 15:29:17.325208 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323081 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323074 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323205 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.323404 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.323931 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.324257 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:19 crc kubenswrapper[5008]: I0129 15:29:19.323301 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:19 crc kubenswrapper[5008]: E0129 15:29:19.323595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323581 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323656 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323606 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.323737 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.323967 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.324132 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:21 crc kubenswrapper[5008]: I0129 15:29:21.323168 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:21 crc kubenswrapper[5008]: E0129 15:29:21.323433 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323209 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323300 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323217 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323451 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323636 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323875 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:23 crc kubenswrapper[5008]: I0129 15:29:23.322822 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:23 crc kubenswrapper[5008]: E0129 15:29:23.323141 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.322957 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.323085 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.323289 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.323337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.324031 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.324401 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.324661 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.324769 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:29:25 crc kubenswrapper[5008]: I0129 15:29:25.322908 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:25 crc kubenswrapper[5008]: E0129 15:29:25.323047 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323586 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323744 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323667 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323865 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323965 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:27 crc kubenswrapper[5008]: I0129 15:29:27.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:27 crc kubenswrapper[5008]: E0129 15:29:27.324759 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323754 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323906 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323768 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.323961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.324093 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.324292 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:29 crc kubenswrapper[5008]: I0129 15:29:29.323758 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:29 crc kubenswrapper[5008]: E0129 15:29:29.324506 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323402 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323464 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323515 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.323708 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.323888 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.324002 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:31 crc kubenswrapper[5008]: I0129 15:29:31.323458 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:31 crc kubenswrapper[5008]: E0129 15:29:31.323869 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323607 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.323734 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323863 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.324027 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323872 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.324210 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:33 crc kubenswrapper[5008]: I0129 15:29:33.323290 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:33 crc kubenswrapper[5008]: E0129 15:29:33.323470 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.154247 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.154947 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155010 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" exitCode=1
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155050 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"}
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155105 5008 scope.go:117] "RemoveContainer" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155731 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"
Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.156110 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-42hcz_openshift-multus(cdd8ae23-3f9f-49f8-928d-46dad823fde4)\"" pod="openshift-multus/multus-42hcz" podUID="cdd8ae23-3f9f-49f8-928d-46dad823fde4"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322871 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322872 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323704 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323349 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322941 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323890 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:35 crc kubenswrapper[5008]: I0129 15:29:35.161186 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log"
Jan 29 15:29:35 crc kubenswrapper[5008]: I0129 15:29:35.323200 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:35 crc kubenswrapper[5008]: E0129 15:29:35.323415 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.322942 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.322999 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323167 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.323208 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323455 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323877 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.318386 5008 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 29 15:29:37 crc kubenswrapper[5008]: I0129 15:29:37.323070 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.326260 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.439998 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323537 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.323684 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323864 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.325589 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.325709 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.327434 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"
Jan 29 15:29:39 crc kubenswrapper[5008]: I0129 15:29:39.323308 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:39 crc kubenswrapper[5008]: E0129 15:29:39.323539 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.165515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"]
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.185251 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.188818 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.188836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"}
Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.188974 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.189757 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.218525 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podStartSLOduration=102.218504856 podStartE2EDuration="1m42.218504856s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:40.217554216 +0000 UTC m=+123.890408453" watchObservedRunningTime="2026-01-29 15:29:40.218504856 +0000 UTC m=+123.891359093"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323299 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323337 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324106 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324303 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324376 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323844 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323844 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.323921 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323973 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324286 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324497 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.440883 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323853 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323749 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323879 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.323997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324128 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324225 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324309 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:45 crc kubenswrapper[5008]: I0129 15:29:45.323268 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.211616 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log"
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.211673 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0"}
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.322816 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.322952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.322968 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.323012 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323091 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.323139 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323191 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323231 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:47 crc kubenswrapper[5008]: E0129 15:29:47.441503 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323640 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323731 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323655 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.323906 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324093 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324222 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324315 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.323589 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323404 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.323751 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.324005 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.324091 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.322852 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.322911 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.322980 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.323170 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323215 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.323246 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323428 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323375 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323592 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323687 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323703 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328178 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328753 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328939 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329026 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329231 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329644 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.445190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.498004 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.498755 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.499310 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.499898 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.533463 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.533867 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.534893 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: W0129 15:29:57.535023 5008 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 29 15:29:57 crc kubenswrapper[5008]: E0129 15:29:57.535044 5008 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 29 15:29:57 crc kubenswrapper[5008]: W0129 15:29:57.535073 5008 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Jan 29 15:29:57 crc kubenswrapper[5008]: E0129 15:29:57.535083 5008 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.535998 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.536207 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546366 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546439 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546464 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.563551 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.566440 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.566677 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567620 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567631 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567765 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567945 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567975 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568668 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568834 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568991 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569124 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569204 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569226 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569335 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.571295 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572033 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572370 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572954 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573086 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573238 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573555 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573961 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576129 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576372 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576677 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.577293 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.579319 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.581754 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582045 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582507 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582940 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585287 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585419 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585527 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.588261 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595039 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595053 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595042 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.596209 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.596702 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.602577 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.603416 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.603768 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.604190 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.604751 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.622984 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625041 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625527 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.626186 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.629628 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.629986 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630174 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630422 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630554 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630844 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630925 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631010 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631331 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631482 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631692 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631746 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631925 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631952 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632085 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632178 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632184 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632282 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632089 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632362 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632390 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632358 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lkcrp"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632485 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632539 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632641 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632777 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632908 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.633070 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lkcrp"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.635752 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636052 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636468 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636610 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636677 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636774 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636841 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637076 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637273 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637703 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637858 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638044 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638270 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638498 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638606 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638642 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638697 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638830 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638967 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639064 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639163 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639277 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639371 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639461 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639544 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640090 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640417 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640812 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641113 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641319 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641835 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"]
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.642495 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647115 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647142 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647177 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647205 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bmd\" (UniqueName: \"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647228 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647274 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647317 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName:
\"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647366 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647406 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647426 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647449 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647468 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647490 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" 
(UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647514 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647535 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647558 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647581 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647644 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647666 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 
15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647688 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647777 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647822 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647842 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647863 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.647917 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647973 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647992 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648012 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648091 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648138 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648172 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648195 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648217 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648238 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648285 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648320 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: 
\"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648341 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648398 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648421 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648426 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648495 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648515 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648545 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648567 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648587 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648606 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648629 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.649809 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.650196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650325 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650561 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650594 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650704 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650771 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650909 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651047 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651147 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651202 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651257 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651744 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652383 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652442 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652813 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653163 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653303 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653385 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.656114 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.656567 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.657354 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.665937 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.667671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.670658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.670915 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678015 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678374 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678700 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678899 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.679331 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.679917 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.681682 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.681889 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.682520 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.685840 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.686284 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.686687 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687138 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687698 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.701719 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.705815 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.706476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.708652 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.709297 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.710544 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.711041 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.711570 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.717280 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.717373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718106 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718287 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718554 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718839 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.719085 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.720824 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.720863 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.722349 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.722580 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.729033 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.733057 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.734184 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.735553 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.737201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.737611 5008 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.738767 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.742521 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.742559 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.744935 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.745878 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qs6wx"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.746354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.746909 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.747682 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749186 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749269 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: 
\"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749363 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749382 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749446 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.749461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749477 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749495 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749524 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749538 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749555 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749572 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749587 5008 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749603 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749630 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749645 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749660 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: 
\"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749743 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749776 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749824 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749855 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: 
\"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749885 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749908 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750740 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750809 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750898 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: 
I0129 15:29:57.750917 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.751396 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.751860 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.753647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755014 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755691 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757041 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757227 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757639 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757712 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757733 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757790 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757813 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757839 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757863 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757918 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757945 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757965 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757981 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757999 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758048 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758066 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758085 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758100 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758153 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bmd\" (UniqueName: \"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758218 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758234 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758252 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758268 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758311 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758346 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758399 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.759707 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.760177 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.760286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.761097 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.761401 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.775995 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776485 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.777931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778113 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778139 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778183 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778223 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778225 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778246 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778284 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778337 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778345 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778743 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779235 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779301 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779372 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779390 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779475 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779516 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780141 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780450 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780657 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780683 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780709 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780736 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780759 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: 
\"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780834 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780847 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780955 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781007 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781029 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781512 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781692 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.782445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783097 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783864 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.785899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786091 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786383 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787452 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787580 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787618 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787704 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.787866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787900 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.788125 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.789007 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.790378 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.791277 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792016 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792036 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792560 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.793150 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.793370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.794634 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795063 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795327 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795822 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795941 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.797653 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.802696 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.804247 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.806430 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.811208 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.818943 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.821226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.822741 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.822776 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.825100 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.827409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.828733 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.830478 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.832127 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.833566 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.835257 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.836923 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.839952 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.841597 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.841879 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.843198 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.844393 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.847582 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.848906 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.850123 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tw5d5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.850256 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.851196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tw5d5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.851309 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.852254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.879685 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892619 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892657 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892707 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
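The "SyncLoop ADD"/"SyncLoop UPDATE" and "No sandbox for pod can be found" lines above reflect kubelet's event-driven sync loop: pod events from the API server arrive on a channel, and a pod seen for the first time has no sandbox yet, so one must be created before containers can start. A tiny Go sketch of that dispatch pattern; the types and names here are invented for illustration and are not kubelet's own.

// Illustrative sketch of the sync-loop dispatch behind the messages above.
package main

import "fmt"

type op string

const (
	add    op = "ADD"
	update op = "UPDATE"
)

type podUpdate struct {
	op   op
	pods []string
}

func main() {
	sandboxes := map[string]bool{} // pod -> sandbox exists
	updates := make(chan podUpdate, 4)
	updates <- podUpdate{add, []string{"hostpath-provisioner/csi-hostpathplugin-g9x2n"}}
	updates <- podUpdate{update, []string{"openshift-dns/dns-default-tw5d5"}}
	close(updates)

	for u := range updates { // the sync loop proper
		fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.op, "api", u.pods)
		for _, p := range u.pods {
			if !sandboxes[p] {
				fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", p)
				sandboxes[p] = true // stand-in for creating the pod sandbox
			}
		}
	}
}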
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892748 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892839 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892872 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892954 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892975 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892992 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893010 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893026 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893044 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: 
\"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893084 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893118 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893138 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893164 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893182 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893276 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893363 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893373 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893379 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893477 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893501 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893546 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893562 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893588 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893603 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893615 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893633 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893660 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893706 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893731 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893753 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893773 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893875 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893947 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" 
(UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893962 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.894018 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.895033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.895183 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.896171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.896709 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.897168 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897222 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897759 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.899442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.899998 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.921816 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.942533 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.961840 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.986959 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.995970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.001852 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.006011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.021706 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.027772 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.063057 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.066334 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.077427 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.082093 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.083682 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.102032 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.110131 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.122987 5008 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.125146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.142366 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.147880 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.162719 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.164520 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.183201 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.188506 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.209361 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.215487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.222855 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.243439 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.262998 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.268228 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.283709 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.302340 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.322693 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.327952 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.342365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.344771 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.362695 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.383015 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.402233 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.423150 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.428081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.443281 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:29:58 crc 
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.455242 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.463944 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.483151 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.502748 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.508463 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.522880 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.544282 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.562829 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.598539 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.601891 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.605812 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.623656 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.643227 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.649732 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.651139 5008 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.651253 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config podName:6db03bb1-4833-4d3f-82d5-08ec5710251f nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.151222199 +0000 UTC m=+142.824076476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config") pod "machine-api-operator-5694c8668f-fsx74" (UID: "6db03bb1-4833-4d3f-82d5-08ec5710251f") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.662632 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.683632 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.701024 5008 request.go:700] Waited for 1.018087865s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.703843 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.716584 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.723082 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.744481 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.745605 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.772967 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.776688 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
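The request.go:700 line above ("Waited for 1.018087865s due to client-side throttling, not priority and fairness") is emitted by client-go's own token-bucket rate limiter, not by the API server: during startup the kubelet issues one list/watch per referenced secret and configmap, and once the bucket empties, requests queue locally. A short sketch of where that limiter lives; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not kubelet's defaults.

// Sketch: the client-side limiter that produces "Waited for Ns due to
// client-side throttling ..." is configured on rest.Config. Illustration only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	// Client-side token bucket: sustained 5 requests/s, bursts up to 10.
	// Once the bucket is empty, client-go sleeps before sending and logs
	// the throttling message seen in the kubelet log.
	cfg.QPS = 5
	cfg.Burst = 10

	client := kubernetes.NewForConfigOrDie(cfg)
	// A burst of list calls (one per namespace, as during kubelet startup)
	// quickly exhausts the bucket and starts queueing client-side.
	for _, ns := range []string{"openshift-console-operator", "openshift-marketplace"} {
		secrets, err := client.CoreV1().Secrets(ns).List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d secrets\n", ns, len(secrets.Items))
	}
}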
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.801989 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.823458 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.842382 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.862354 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.868576 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.882459 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893468 5008 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893552 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert podName:98a7839a-3ca2-49f7-a330-f77ffc4e4da3 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.393528802 +0000 UTC m=+143.066383049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" (UID: "98a7839a-3ca2-49f7-a330-f77ffc4e4da3") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893564 5008 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893683 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle podName:657b37ac-43ff-4309-9bfa-5220bccb08c0 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.393648285 +0000 UTC m=+143.066502592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle") pod "service-ca-9c57cc56f-w2lv5" (UID: "657b37ac-43ff-4309-9bfa-5220bccb08c0") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894833 5008 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894889 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert podName:c9bc5b93-0c42-401c-8ca5-e5154e8be34d nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.394875048 +0000 UTC m=+143.067729295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert") pod "packageserver-d55dfcdfc-j8wt8" (UID: "c9bc5b93-0c42-401c-8ca5-e5154e8be34d") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894905 5008 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894971 5008 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895003 5008 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894990 5008 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs podName:0b6fe31f-5401-4a2e-bccb-e57fab2a35ba nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.39497461 +0000 UTC m=+143.067828917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs") pod "multus-admission-controller-857f4d67dd-cb6xn" (UID: "0b6fe31f-5401-4a2e-bccb-e57fab2a35ba") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895092 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key podName:657b37ac-43ff-4309-9bfa-5220bccb08c0 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395080713 +0000 UTC m=+143.067934970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key") pod "service-ca-9c57cc56f-w2lv5" (UID: "657b37ac-43ff-4309-9bfa-5220bccb08c0") : failed to sync secret cache: timed out waiting for the condition
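
Each "failed to sync ... cache: timed out waiting for the condition" above is the kubelet declining to mount a configMap or secret volume before the informer cache backing it has synced; the mount is then retried after the recorded durationBeforeRetry. A rough sketch of that gate with client-go (illustrative, not the kubelet's exact code path; the one-minute bound is an assumption):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // waitForConfigMapCache blocks volume setup until the ConfigMap informer
    // has completed its initial LIST; if the bound expires first, the caller
    // sees the same "timed out waiting for the condition" error as the log.
    func waitForConfigMapCache(cs kubernetes.Interface) error {
        factory := informers.NewSharedInformerFactory(cs, 0)
        inf := factory.Core().V1().ConfigMaps().Informer()

        stop := make(chan struct{})
        go func() { time.Sleep(time.Minute); close(stop) }() // assumed bound
        factory.Start(stop)

        if !cache.WaitForCacheSync(stop, inf.HasSynced) {
            return fmt.Errorf("failed to sync configmap cache: timed out waiting for the condition")
        }
        return nil
    }
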
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895110 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert podName:c9bc5b93-0c42-401c-8ca5-e5154e8be34d nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395100543 +0000 UTC m=+143.067954790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert") pod "packageserver-d55dfcdfc-j8wt8" (UID: "c9bc5b93-0c42-401c-8ca5-e5154e8be34d") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895124 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config podName:98a7839a-3ca2-49f7-a330-f77ffc4e4da3 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395117424 +0000 UTC m=+143.067971771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" (UID: "98a7839a-3ca2-49f7-a330-f77ffc4e4da3") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.902731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.921970 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.943101 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.962499 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.982172 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.002133 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.022511 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.041764 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.047449 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.062532 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.083514 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.103664 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.122446 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.123446 5008 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console/downloads-7954f5f757-6wmrp" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.123553 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.145197 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.183637 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.203543 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.214600 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.223060 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.243894 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.263817 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.283347 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.303350 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.323350 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.342535 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.358541 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"]
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.365911 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 15:29:59 crc kubenswrapper[5008]: W0129 15:29:59.367439 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64cf2ff9_40f4_48a5_a16c_6513cf0470bd.slice/crio-9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66 WatchSource:0}: Error finding container 9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66: Status 404 returned error can't find the container with id 9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.382936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.403645 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418913 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418965 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419090 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419117 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419408 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419488 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.421443 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423237 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423714 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.424035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.424578 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.442347 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.462154 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.472494 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
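
The pairing above, reconciler_common "MountVolume started" followed by operation_generator "MountVolume.SetUp succeeded", reflects an executor that runs each volume operation in its own goroutine while nestedpendingoperations.go guarantees at most one in-flight operation per volume (its "Operation for ... failed. No retries permitted until ..." lines are the other half of that bookkeeping). A simplified sketch of the idea, not the kubelet's implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    // pendingOps serializes operations per volume: a second Run for the same
    // UniqueName is rejected, and the reconciler simply tries again on its
    // next pass, after whatever backoff is recorded for that volume.
    type pendingOps struct {
        mu       sync.Mutex
        inFlight map[string]bool // keyed by volume UniqueName
    }

    func newPendingOps() *pendingOps {
        return &pendingOps{inFlight: make(map[string]bool)}
    }

    func (p *pendingOps) Run(uniqueName string, op func() error) error {
        p.mu.Lock()
        if p.inFlight[uniqueName] {
            p.mu.Unlock()
            return fmt.Errorf("operation on volume %q already pending", uniqueName)
        }
        p.inFlight[uniqueName] = true
        p.mu.Unlock()

        go func() {
            defer func() {
                p.mu.Lock()
                delete(p.inFlight, uniqueName)
                p.mu.Unlock()
            }()
            _ = op() // a failure here feeds the retry/backoff lines in the log
        }()
        return nil
    }
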
for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.473012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.482439 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.522566 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.538260 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.558605 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.585487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.597138 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.620080 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.635122 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bmd\" (UniqueName: 
\"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.654174 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.656157 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.690010 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.697926 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.707230 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.707697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.721103 5008 request.go:700] Waited for 1.870537953s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.721461 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.727705 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.733137 5008 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.742517 5008 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.763067 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.768742 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.782933 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.798033 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.802664 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.805652 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.814719 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.823207 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.856575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.875764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.892359 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.899188 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.916347 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.920291 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.928317 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.939856 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.949049 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.956102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.979298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.996002 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.000968 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.006042 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.009712 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"]
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.025165 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.042475 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.054068 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.077573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.097358 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.120329 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.120688 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.127388 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"]
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.141734 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.142893 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.144953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.145180 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.148634 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.165601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.182893 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.202411 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.202700 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.215277 5008 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.215400 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config podName:6db03bb1-4833-4d3f-82d5-08ec5710251f nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.21536919 +0000 UTC m=+144.888223467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config") pod "machine-api-operator-5694c8668f-fsx74" (UID: "6db03bb1-4833-4d3f-82d5-08ec5710251f") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.221643 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.235327 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.263611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66"}
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376522 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376570 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376684 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376863 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377034 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377127 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376876 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
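
Note that the machine-api-operator config volume has now moved from durationBeforeRetry 500ms (at 15:29:58.651253) to 1s: failed volume operations back off exponentially rather than retrying at a fixed rate. The doubling rule, sketched in plain Go (the ceiling is an assumption; the kubelet bounds this growth, but the exact cap is not visible in this log):

    package main

    import "time"

    // nextRetryDelay doubles the wait after every consecutive failure,
    // starting at 500ms, up to an assumed ceiling.
    func nextRetryDelay(prev time.Duration) time.Duration {
        const initial = 500 * time.Millisecond
        const maxDelay = 2 * time.Minute // assumption, not from the log
        if prev <= 0 {
            return initial
        }
        next := prev * 2
        if next > maxDelay {
            next = maxDelay
        }
        return next
    }
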
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377667 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377718 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377771 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377894 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378459 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.379128 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:00.879099957 +0000 UTC m=+144.551954314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.379283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.437866 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lkcrp"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.468488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.480294 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.480424 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:00.980395429 +0000 UTC m=+144.653249706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482715 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482776 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483144 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483352 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483535 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483649 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483776 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484007 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484284 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484401 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484507 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484540 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484570 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484684 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484772 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484844 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485106 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485208 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485276 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485320 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485523 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485858 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485892 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485946 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.486099 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.486132 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487318 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487540 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487718 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488165 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488293 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgkr\" (UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"
Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488395 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n"
Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.488438 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-29 15:30:00.988416239 +0000 UTC m=+144.661270586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488489 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.490102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.493571 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.494385 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.495672 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.497241 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.499550 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.552397 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.559519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.591183 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.591412 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.091388505 +0000 UTC m=+144.764242742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.591706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.592607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593637 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: 
\"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593736 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593751 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594230 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594278 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594302 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594341 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594359 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594380 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 
15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594413 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594451 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594516 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594532 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594575 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594598 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594622 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgkr\" 
(UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594659 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594684 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594821 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594854 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594921 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594948 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595021 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595069 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595092 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595116 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595118 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595189 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595197 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.595598 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.095576684 +0000 UTC m=+144.768430991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.596806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.596828 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.597439 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.597488 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.599083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod 
\"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.599194 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.601105 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.601283 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.606910 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607184 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607344 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607546 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607609 5008 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607930 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.608382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.608575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.634365 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.641848 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.661045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgkr\" (UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.678017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.698064 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.698280 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.198206382 +0000 UTC m=+144.871060619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.698398 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.699131 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.199118305 +0000 UTC m=+144.871972542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.701330 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.708355 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.744600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.745663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 
15:30:00.756757 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.759953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.775972 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: W0129 15:30:00.791858 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696d81dd_3f1a_4c58_ae69_29fff54e590b.slice/crio-766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209 WatchSource:0}: Error finding container 766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209: Status 404 returned error can't find the container with id 766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209 Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.797915 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.799636 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.799794 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.29975983 +0000 UTC m=+144.972614067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.799977 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.800308 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.300298064 +0000 UTC m=+144.973152341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.820645 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.837320 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.843843 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.859034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.899100 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.902456 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.903013 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.402994273 +0000 UTC m=+145.075848520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.904128 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.926696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.927269 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.929562 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.938678 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.948959 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.959331 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.959558 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.966927 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.998989 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.001921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.004536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.004891 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.50487963 +0000 UTC m=+145.177733857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.032053 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.069278 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.106114 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.106431 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.606412517 +0000 UTC m=+145.279266754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.181409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"] Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.207453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.207813 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.707775971 +0000 UTC m=+145.380649738 (durationBeforeRetry 500ms). 
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.227060 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.232306 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.280167 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerStarted","Data":"877a7a5331b5add1273bcb856b0a6b558e22fc4ee16ab1f101067f85b3c64f92"}
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282071 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"}
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282494 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282977 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"48cc5b0c7577ca631f2af7126b9199d3db84603543952247236516fe60199dfd"}
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.284513 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.284554 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.286224 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" event={"ID":"696d81dd-3f1a-4c58-ae69-29fff54e590b","Type":"ContainerStarted","Data":"766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209"}
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.287991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"60e0f31c678f70981e70a492642eae649c71539fbf0605d0a371bacca465f83a"}
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.290991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lkcrp" event={"ID":"380625b0-02b5-417a-bd1e-7ccf56f56059","Type":"ContainerStarted","Data":"7b5065932b2f00b6ed88c79311b771081ad7ec24f48aa25d546d34c280f791c7"}
Jan 29 15:30:01 crc kubenswrapper[5008]: W0129 15:30:01.297612 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a4c50c_34f7_4c9c_9cbd_baaf50ed16e1.slice/crio-06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667 WatchSource:0}: Error finding container 06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667: Status 404 returned error can't find the container with id 06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.309235 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.309495 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.310549 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.810530032 +0000 UTC m=+145.483384279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.311602 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.368701 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.415599 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.416342 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.916324531 +0000 UTC m=+145.589178768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.428597 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.428720 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.448576 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.449821 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"]
Jan 29 15:30:01 crc kubenswrapper[5008]: W0129 15:30:01.507234 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf88f09ca_9a9f_4d6e_bb2f_f00d75ae11fb.slice/crio-945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e WatchSource:0}: Error finding container 945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e: Status 404 returned error can't find the container with id 945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.516835 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.517797 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.017756357 +0000 UTC m=+145.690610594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.617874 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.618206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.118195587 +0000 UTC m=+145.791049824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.723278 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.723746 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.22372455 +0000 UTC m=+145.896578787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.824493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.825364 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.32534961 +0000 UTC m=+145.998203837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.928165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.928548 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.42852863 +0000 UTC m=+146.101382867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.995372 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.016394 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.029376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.029694 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.529682099 +0000 UTC m=+146.202536336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.058899 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56b5e44_f079_4c56_9e19_e09996979003.slice/crio-283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1 WatchSource:0}: Error finding container 283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1: Status 404 returned error can't find the container with id 283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.130472 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.130724 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.630708434 +0000 UTC m=+146.303562671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.203332 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6wmrp" podStartSLOduration=124.203314595 podStartE2EDuration="2m4.203314595s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.202202116 +0000 UTC m=+145.875056353" watchObservedRunningTime="2026-01-29 15:30:02.203314595 +0000 UTC m=+145.876168832"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.232537 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.233014 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.733000023 +0000 UTC m=+146.405854260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.314171 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" event={"ID":"c9bc5b93-0c42-401c-8ca5-e5154e8be34d","Type":"ContainerStarted","Data":"f9fdd5e63506b623e7ac7fad8b3704775ab1baa47b2c8a054b36ef7c51f63734"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.314423 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" event={"ID":"c9bc5b93-0c42-401c-8ca5-e5154e8be34d","Type":"ContainerStarted","Data":"a94db479e2c3c28357dcdc8bd1f0553d527dc2b6d6b066269259da8b458dc0d6"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.317164 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" event={"ID":"696d81dd-3f1a-4c58-ae69-29fff54e590b","Type":"ContainerStarted","Data":"faddad3801b36a1b5efb7f021265ba0b8cf5ce6cc6212681d5448ba08c10d676"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.318250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" event={"ID":"3e0bc350-e279-4e74-a70e-c89593f115f3","Type":"ContainerStarted","Data":"ba194aecb8b7b07da24347645e17594538b3bffb024abe9f2b10c66f8e58e0ae"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.320270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" event={"ID":"657b37ac-43ff-4309-9bfa-5220bccb08c0","Type":"ContainerStarted","Data":"73abf826bcbd7e7504623e7b47699d195d27874fe60fd7928104048edbf5d2bf"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.320323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" event={"ID":"657b37ac-43ff-4309-9bfa-5220bccb08c0","Type":"ContainerStarted","Data":"360187cae4c917b9123e7622621d57ac9a8bad205ce113f28ca8e357f786a76a"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.322507 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerStarted","Data":"06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.323511 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"ecd556d3b48a990bce744b3530d2400e624783729add88ee057e582c469708cf"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.324123 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" event={"ID":"98a7839a-3ca2-49f7-a330-f77ffc4e4da3","Type":"ContainerStarted","Data":"856bd5b826873efd4ba7fb31e6a28bffee9b67efdc724753b10c4a2d1afe1c3c"}
event={"ID":"98a7839a-3ca2-49f7-a330-f77ffc4e4da3","Type":"ContainerStarted","Data":"856bd5b826873efd4ba7fb31e6a28bffee9b67efdc724753b10c4a2d1afe1c3c"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.324770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" event={"ID":"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb","Type":"ContainerStarted","Data":"945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.326240 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"cd6d4f39442284946f16d7a0b792ec3e66de30e9dc56a9bdfd64c76f9b7148cd"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.327486 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerStarted","Data":"4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.327713 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.332268 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.333917 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.333355 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.833335909 +0000 UTC m=+146.506190146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.333285 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.334650 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"0b6fc6fe80c6bb0353c34b853cc6c54cd78d9c076665787a76bcc0efafcba012"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.334729 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"53b0b2512c48956ec122d0b88b3c39c6dbd02e3557a3d71540e30ef4c1665b09"} Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.335241 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.335259 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" podStartSLOduration=124.335247099 podStartE2EDuration="2m4.335247099s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.33260486 +0000 UTC m=+146.005459107" watchObservedRunningTime="2026-01-29 15:30:02.335247099 +0000 UTC m=+146.008101336" Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.336180 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.836163513 +0000 UTC m=+146.509017750 (durationBeforeRetry 500ms). 
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.340916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qs6wx" event={"ID":"ed80deac-23a5-4504-af92-231afa07fd27","Type":"ContainerStarted","Data":"2df5f3001b1a9158190e6ec9b9ff492d2de9247fbaf7d8bfa0d6971c2a614273"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.348466 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lkcrp" event={"ID":"380625b0-02b5-417a-bd1e-7ccf56f56059","Type":"ContainerStarted","Data":"ec07f9f91e2751c1b8e9b75c9c2c6e1533e44fea17e8e88966dd9a07a4ccf470"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.351739 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerStarted","Data":"283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.351942 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.352025 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.396739 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lkcrp" podStartSLOduration=123.396717939 podStartE2EDuration="2m3.396717939s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.396181144 +0000 UTC m=+146.069035381" watchObservedRunningTime="2026-01-29 15:30:02.396717939 +0000 UTC m=+146.069572186"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.399149 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podStartSLOduration=124.399140032 podStartE2EDuration="2m4.399140032s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.354518863 +0000 UTC m=+146.027373100" watchObservedRunningTime="2026-01-29 15:30:02.399140032 +0000 UTC m=+146.071994289"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.424907 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.434334 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.434392 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.436281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.438014 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.937988579 +0000 UTC m=+146.610842876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.438577 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lkcrp"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.441306 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.441348 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.459620 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"]
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.460741 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b987d67_e424_4286_a25d_11bfc4d1e577.slice/crio-0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba WatchSource:0}: Error finding container 0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba: Status 404 returned error can't find the container with id 0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.464928 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.466264 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"]
pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.475835 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.483763 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.491217 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.495853 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.516383 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.532166 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.543750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.545242 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.045227967 +0000 UTC m=+146.718082204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.558875 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.564275 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.575720 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.584965 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"] Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.587720 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7473d665_3627_4470_a820_ebdbdc113587.slice/crio-744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff WatchSource:0}: Error finding container 744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff: Status 404 returned error can't find the container with id 744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.588609 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.590524 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.593139 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"] Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.604899 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"] Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.608015 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a4a04b_067c_43f1_b355_46161babe869.slice/crio-1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20 WatchSource:0}: Error finding container 1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20: Status 404 returned error can't find the container with id 1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20 Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.650114 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf3d6df4_e07e_4d72_b2b6_20dcb29700d7.slice/crio-f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53 WatchSource:0}: Error finding container f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53: Status 404 returned error can't find the container with id 
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.651746 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.652136 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.151973992 +0000 UTC m=+146.824828239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.652543 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.653168 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.153156683 +0000 UTC m=+146.826010920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.704232 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.753604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.753917 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.25390053 +0000 UTC m=+146.926754767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.757883 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a912999_007c_495d_aaa3_857d76158a91.slice/crio-e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb WatchSource:0}: Error finding container e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb: Status 404 returned error can't find the container with id e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.767584 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.811554 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.828204 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.862020 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.862404 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.36238808 +0000 UTC m=+147.035242317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.907536 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.928695 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tw5d5"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.962807 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.963229 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.46321147 +0000 UTC m=+147.136065707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: W0129 15:30:03.000308 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272fd84c_e1ec_47ce_a8dc_fb0573d1208c.slice/crio-9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95 WatchSource:0}: Error finding container 9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95: Status 404 returned error can't find the container with id 9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95
Jan 29 15:30:03 crc kubenswrapper[5008]: W0129 15:30:03.048444 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda161323e_d13e_46da_b8bd_347b56ef5110.slice/crio-7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611 WatchSource:0}: Error finding container 7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611: Status 404 returned error can't find the container with id 7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.064757 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.065279 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.565260422 +0000 UTC m=+147.238114659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.165383 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.165845 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.665829585 +0000 UTC m=+147.338683822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.267164 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.267483 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.767470697 +0000 UTC m=+147.440324934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.369671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.369874 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.869847167 +0000 UTC m=+147.542701404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370386 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.370710 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.870702458 +0000 UTC m=+147.543556695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"7fd3bac13c8d6d623ec5ce8691ee565adf989abe6b4a9d696fc41378d51b54c1"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370823 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"62a4c862802455f7f13e84a4f5d43f1b0e2fb36f0296a54bcd5b45a113396b5a"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.372671 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" event={"ID":"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d","Type":"ContainerStarted","Data":"163684b7504773b63bd5adad40214b4960cfe011f40abdc9034978ac1e6139df"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.374214 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"336e419022e770079f099785d1b181791219e135db5bd3ba119d808a509365d4"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.396684 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" event={"ID":"5b987d67-e424-4286-a25d-11bfc4d1e577","Type":"ContainerStarted","Data":"0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.401336 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerStarted","Data":"0d50d0b75f6e0f8a4026a940843934088791e81f1a0bc633f602d35cd43598eb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.410020 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" event={"ID":"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb","Type":"ContainerStarted","Data":"a95caa66886156554c453682e623f6b46a194df2e3bcacdcc9b6c1208e8f9e27"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.430639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"b6c734ddae850b020a8937b3c086e0456e1a5603348817d8875b69a322e1d4cb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.434528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" event={"ID":"ec989c54-8ec3-4f9d-87b0-2665776ffd15","Type":"ContainerStarted","Data":"b1f492a372d6eae470027fc505b85da5dcd1cc39903a5f647e52dfb3b2d873ca"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.435921 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerStarted","Data":"1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.435921 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerStarted","Data":"1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.440223 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.440280 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.455975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerStarted","Data":"8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.456949 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.459249 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" event={"ID":"3e0bc350-e279-4e74-a70e-c89593f115f3","Type":"ContainerStarted","Data":"87926bebfd41473ef5acb541830b9fae196b3d4f84efe83c9867c94af1c84690"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.467420 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.467469 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.468982 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" event={"ID":"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce","Type":"ContainerStarted","Data":"8d07e1f320ec80ca7ae40d7dd78e3fb623341ff7bca3b744228d15d6a44094c2"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.469014 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" event={"ID":"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce","Type":"ContainerStarted","Data":"722c047341be5f0f9b650010f9f21dcad960b41633cc80d83490742446b5f6c5"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.470677 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"73f6d1b44636709ff4b14e56b2dddc17b510c87ed09c684f23c5478e481c98d4"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.472330 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qs6wx" event={"ID":"ed80deac-23a5-4504-af92-231afa07fd27","Type":"ContainerStarted","Data":"d1858076cc9c0d4595b98d460d6b6ce088c202486ac2e5b777caebf358b1b004"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.474030 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" podStartSLOduration=125.474017313 podStartE2EDuration="2m5.474017313s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.42920303 +0000 UTC m=+147.102057267" watchObservedRunningTime="2026-01-29 15:30:03.474017313 +0000 UTC m=+147.146871560"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.474112 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podStartSLOduration=124.474109156 podStartE2EDuration="2m4.474109156s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.473600333 +0000 UTC m=+147.146454570" watchObservedRunningTime="2026-01-29 15:30:03.474109156 +0000 UTC m=+147.146963403"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.477212 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
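[Editor's note] Several readiness probes above fail with "connect: connection refused": the container process is running but its listener is not accepting connections yet, so the kubelet marks the pod not ready and retries on the probe period. A rough, illustrative approximation of such an HTTP probe in Go (endpoint taken from the log; the 1s timeout and 200-399 success range mirror kubelet probe defaults, and TLS verification is skipped since kubelet HTTPS probes do not verify serving certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout:   time.Second, // kubelet's default probe timeout
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // "connect: connection refused" surfaces here
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil // probe success
}

func main() {
	if err := probe("https://10.217.0.31:8443/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}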
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.478257 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.978240014 +0000 UTC m=+147.651094251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.478992 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" event={"ID":"a14210e2-42e9-45d9-8633-a5df1a863a9f","Type":"ContainerStarted","Data":"94279afa3975feaf12b417ce18986133f731f91bdcd91225d93fa3677504f600"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.481839 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"eaa35375549041df26c5c2562e481b702aa80b65ed6196f86f72e81a93c0ef28"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.483155 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" event={"ID":"98a7839a-3ca2-49f7-a330-f77ffc4e4da3","Type":"ContainerStarted","Data":"3b34e358490b24ea840ff744cc1313b6fb6efc2a6401f73c0f711942fb851192"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.500416 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" event={"ID":"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7","Type":"ContainerStarted","Data":"f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.502160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerStarted","Data":"b733768ba86559d686adf72003f41b4761850c81887b6cabc93a0692634ef414"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.505262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.506197 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"5663bfb52f3262c7efcc8eb03f615bfc6f226e4272bbbf5e73e8b69e357d20cb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.508967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"4921a3d56c7fa08f67856e16dd8430555752f54f44d7bc78fe71cbcdf760a6dc"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511186 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" podStartSLOduration=124.511164396 podStartE2EDuration="2m4.511164396s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.508967579 +0000 UTC m=+147.181821816" watchObservedRunningTime="2026-01-29 15:30:03.511164396 +0000 UTC m=+147.184018633"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" event={"ID":"272fd84c-e1ec-47ce-a8dc-fb0573d1208c","Type":"ContainerStarted","Data":"9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qs6wx" podStartSLOduration=6.511563447 podStartE2EDuration="6.511563447s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.493852893 +0000 UTC m=+147.166707160" watchObservedRunningTime="2026-01-29 15:30:03.511563447 +0000 UTC m=+147.184417684"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.517194 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" event={"ID":"217f16d7-943b-4603-88fa-155377da9788","Type":"ContainerStarted","Data":"dfba1fc312a36bee852844d68251d314f733f9dac3c325e151c142c3787b0de9"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.519601 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p7nds" event={"ID":"aa595b2b-fee5-4e54-926b-40571cf2f472","Type":"ContainerStarted","Data":"195c3f24829bfaf34921e251b99a6f3bdae50c2b9262173c00cead0ae583e0b9"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.521804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.521857 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"327173dfc0d4a0283c57ab91db8bf6bfaf7d338be803aaada8937111649f350b"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.523850 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerStarted","Data":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.524383 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.525279 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body=
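[Editor's note] The pod_startup_latency_tracker lines record podStartSLOduration as the gap between podCreationTimestamp and the timestamp at which the pod was observed running; with firstStartedPulling/lastFinishedPulling at the zero time, no image-pull time is subtracted. A small sketch reproducing the arithmetic for the authentication-operator entry above (timestamps copied from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 15:27:58 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 15:30:03.474017313 +0000 UTC")
	if err != nil {
		panic(err)
	}
	d := observed.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%q\n", d.Seconds(), d.String())
	// Prints: podStartSLOduration=125.474017313 podStartE2EDuration="2m5.474017313s"
}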
pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.545112 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" podStartSLOduration=124.545087514 podStartE2EDuration="2m4.545087514s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.529183157 +0000 UTC m=+147.202037414" watchObservedRunningTime="2026-01-29 15:30:03.545087514 +0000 UTC m=+147.217941761" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.548974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerStarted","Data":"402d302b4932479374dd27184aa53d55585f96a1840b5fcb7e2d79bb208c3ae4"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.556104 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"0d939ce01262d7645bf7f7f25b58669fa679b6b6aa223e47946a2b36751b1d53"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.559571 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" event={"ID":"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277","Type":"ContainerStarted","Data":"e3d6b86a7668acbc7b03cdc7f635fbf977789cd2e641c90b388de06d57416348"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.566363 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerStarted","Data":"e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.569964 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podStartSLOduration=124.569951366 podStartE2EDuration="2m4.569951366s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.55065894 +0000 UTC m=+147.223513177" watchObservedRunningTime="2026-01-29 15:30:03.569951366 +0000 UTC m=+147.242805603" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.570477 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"f6fbbe2d489f924541978c5eb7db46c1df1746d94d1a6044b1c931f8e41a1780"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.570588 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" podStartSLOduration=124.570584182 podStartE2EDuration="2m4.570584182s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.569636937 +0000 UTC m=+147.242491194" watchObservedRunningTime="2026-01-29 15:30:03.570584182 +0000 UTC m=+147.243438419" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.578493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.581246 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.081224841 +0000 UTC m=+147.754079148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.582928 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"f96f93669d3d81dd721b4badc2ba7048ef6f1363d70a87e0a835ce8e7ff42513"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.594770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" event={"ID":"1c37e4bb-792b-4317-87ae-ca4172740500","Type":"ContainerStarted","Data":"8261c2974da97e6c65209b7eb7ac686cc65f4f3389e21ef6308e7bdc35698547"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.594859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" event={"ID":"1c37e4bb-792b-4317-87ae-ca4172740500","Type":"ContainerStarted","Data":"b8b44f3c1bdb03e7c9f1bd11c59ce79debe7026a884d1cd10a95c60fbd40cce7"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.598775 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerStarted","Data":"744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.600278 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.600332 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: 
connect: connection refused" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.614836 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" podStartSLOduration=124.61481975 podStartE2EDuration="2m4.61481975s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.614239435 +0000 UTC m=+147.287093692" watchObservedRunningTime="2026-01-29 15:30:03.61481975 +0000 UTC m=+147.287673987" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.680112 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.681060 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.181033814 +0000 UTC m=+147.853888121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.784100 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.784595 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.284578685 +0000 UTC m=+147.957432932 (durationBeforeRetry 500ms). 
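[Editor's note] Each failed volume operation above is re-queued through nestedpendingoperations, which refuses to start the same operation again until the logged deadline passes ("No retries permitted until ... (durationBeforeRetry 500ms)"). A simplified sketch of that gating pattern, assuming the fixed 500ms delay shown in this log (the actual kubelet code grows the delay exponentially on repeated failures):

package main

import (
	"fmt"
	"time"
)

// retryGate refuses to start an operation again until
// lastError + durationBeforeRetry has elapsed.
type retryGate struct {
	lastError time.Time
	delay     time.Duration
}

func (g *retryGate) tryStart(now time.Time) error {
	if until := g.lastError.Add(g.delay); now.Before(until) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			until.Format(time.RFC3339Nano), g.delay)
	}
	return nil
}

func main() {
	g := &retryGate{delay: 500 * time.Millisecond}
	now := time.Now()
	g.lastError = now                                        // operation just failed
	fmt.Println(g.tryStart(now.Add(100 * time.Millisecond))) // still gated: error
	fmt.Println(g.tryStart(now.Add(600 * time.Millisecond))) // past deadline: <nil>
}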
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.784595 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.284578685 +0000 UTC m=+147.957432932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.888264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.888396 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.388367432 +0000 UTC m=+148.061221679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.889666 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.890266 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.390247971 +0000 UTC m=+148.063102248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.991556 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.992009 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.491981535 +0000 UTC m=+148.164835782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.992190 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.992623 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.492611601 +0000 UTC m=+148.165465838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.096232 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.096384 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.596359458 +0000 UTC m=+148.269213695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.096853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.097121 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.597112467 +0000 UTC m=+148.269966704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.197550 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.197774 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.697741882 +0000 UTC m=+148.370596119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.198176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.198543 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.698532113 +0000 UTC m=+148.371386350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.299439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.299914 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.799891336 +0000 UTC m=+148.472745573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.401527 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.401886 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.901874786 +0000 UTC m=+148.574729023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.446441 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:04 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:04 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:04 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.446515 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.502290 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.502444 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.002422089 +0000 UTC m=+148.675276326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.503006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
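[Editor's note] The router's startup probe now reaches its listener but gets HTTP 500, and the body lists each sub-check in the aggregated healthz format seen above ("[-]name failed: reason withheld", "[+]name ok", trailing "healthz check failed"). A minimal, illustrative handler producing that output shape in Go (check names copied from the log; the wiring is a hypothetical stand-in, not the router's actual implementation):

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	ok   bool
}

// healthz renders the aggregated per-check report and returns 500
// if any check failed, which is what the startup probe observes.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if c.ok {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees "statuscode: 500"
			body += "healthz check failed\n"
		} else {
			body += "healthz check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz/ready", healthz([]check{
		{"backend-http", false}, {"has-synced", false}, {"process-running", true},
	}))
	if err := http.ListenAndServe(":1936", nil); err != nil {
		panic(err)
	}
}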
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.503339 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.003321012 +0000 UTC m=+148.676175249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.603941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.604198 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.104150952 +0000 UTC m=+148.777005229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.604358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.604770 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.104755588 +0000 UTC m=+148.777609815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.608937 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerStarted","Data":"8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.609125 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4268l"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611128 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611176 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerStarted","Data":"74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.613124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" event={"ID":"272fd84c-e1ec-47ce-a8dc-fb0573d1208c","Type":"ContainerStarted","Data":"391533279b80e4f5f53727ee47007e86e4298a4d570c7b399251dd3de6e7d292"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.613809 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.614804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"0d68e51992e60e13aa2ec240834f40001678fbd6640680f3d58ebe34a71c7d34"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.614837 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"7f1a73c150ece73daa71b0bfe26d7b550d33ef87ca603f83e488c75bfe1df3c7"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.615829 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.615879 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.616314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerStarted","Data":"3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.616432 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles" containerID="cri-o://3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" gracePeriod=30
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.620712 5008 generic.go:334] "Generic (PLEG): container finished" podID="4adf65cb-4f11-4061-bcb5-71c3d9b890f7" containerID="0a1a01356733e8fdcf29791389d756c3ebde2fc9de1824cec4875d7045e6d565" exitCode=0
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.620839 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerDied","Data":"0a1a01356733e8fdcf29791389d756c3ebde2fc9de1824cec4875d7045e6d565"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.624579 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p7nds" event={"ID":"aa595b2b-fee5-4e54-926b-40571cf2f472","Type":"ContainerStarted","Data":"7d900cb3e39061652d19e25fcee4c156a7509c50ea1f253d827a31184a732862"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.628556 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"24be9116f08661dd2ec1ffb7c3811b0e9ea964d72aeb90b316f7eab89f80a3fd"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.628614 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"f25d5aefbd420ed5cf85b13633187a76a22fb95cc3f616f0f209c2dfbb186574"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.631094 5008 generic.go:334] "Generic (PLEG): container finished" podID="00332b75-a73b-49c1-9b72-73445baccf6d" containerID="3bfad117be29eee4bccfcaee08b906879445a7ed0a1bbcdc5632ce698e47ade9" exitCode=0
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.631219 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerDied","Data":"3bfad117be29eee4bccfcaee08b906879445a7ed0a1bbcdc5632ce698e47ade9"}
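[Editor's note] The "Killing container with a grace period" entry above shows the standard termination sequence: the runtime sends SIGTERM, waits up to gracePeriod seconds (30 here) for the process to exit, then escalates to SIGKILL. A self-contained Unix sketch of the same pattern against a local child process (purely illustrative; CRI-O performs this for the real container):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Process.Signal(syscall.SIGTERM) // step 1: polite shutdown request
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(30 * time.Second): // gracePeriod=30, as in the log
		fmt.Println("grace period expired; sending SIGKILL")
		_ = cmd.Process.Kill() // step 2: forced termination
		<-done
	}
}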
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.632771 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podStartSLOduration=125.632750321 podStartE2EDuration="2m5.632750321s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.630323436 +0000 UTC m=+147.303177703" watchObservedRunningTime="2026-01-29 15:30:04.632750321 +0000 UTC m=+148.305604558"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634201 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"3296cae282984c7e6920a454f45d67fa5e778e435d6dbd7baa1c7f2891ef7698"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"8a3fdc3b71ca79a1299d1e828bd840933f9809724fa7bd5c34abc069769ee2f0"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634289 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podStartSLOduration=125.634278921 podStartE2EDuration="2m5.634278921s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.632617588 +0000 UTC m=+148.305471845" watchObservedRunningTime="2026-01-29 15:30:04.634278921 +0000 UTC m=+148.307133158"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.640002 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" event={"ID":"a14210e2-42e9-45d9-8633-a5df1a863a9f","Type":"ContainerStarted","Data":"1e0a0d9ca8dff4c21b5f79fbacec87777d92f4850a7d9e7c69963e8eca6ad82d"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.645849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"0d75173362fa52d9cc595b3470116e9df07384275cde3f5e4d7aa4ccbd9945e4"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.648712 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" event={"ID":"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277","Type":"ContainerStarted","Data":"5a1e2653892ae4b1ba274181729a0761b119f8409558bb4b94fe34fc6adcd12b"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.649118 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650462 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650509 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650652 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" event={"ID":"ec989c54-8ec3-4f9d-87b0-2665776ffd15","Type":"ContainerStarted","Data":"1b82191d9c4944ce570495bcc0385f820b0df1ac7caeefc10b73c411e5f4e461"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.652420 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"bf02f6d83597a645783d7a4b36e0c926cbd336c8598ba779c42fe94294415f8f"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.652446 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"44786dbd89839bfaeeb45009ec688ded945e7d27a0f947c0e5d968e2ac0c9c82"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.655520 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podStartSLOduration=125.655510537 podStartE2EDuration="2m5.655510537s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.653399191 +0000 UTC m=+148.326253428" watchObservedRunningTime="2026-01-29 15:30:04.655510537 +0000 UTC m=+148.328364774"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.657081 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"2bd500525eb46ec20a4b0e11ad856c6dccaa8b6e0c0742e92a837b87a9f961e3"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.660107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerStarted","Data":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.663530 5008 generic.go:334] "Generic (PLEG): container finished" podID="653b37fe-d452-4111-b27f-ef75530abe41" containerID="103761a5e1810d875d78be2a091de722cf91467b2e894ae56cf0127f4867da60" exitCode=0
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.663577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerDied","Data":"103761a5e1810d875d78be2a091de722cf91467b2e894ae56cf0127f4867da60"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.667125 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"cd1a48045d8ac4b70ad1691f2d053ec8afe9c01194bcbe9830b15c4fe2e87ba3"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.671042 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-p7nds" podStartSLOduration=7.671023583 podStartE2EDuration="7.671023583s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.666863654 +0000 UTC m=+148.339717891" watchObservedRunningTime="2026-01-29 15:30:04.671023583 +0000 UTC m=+148.343877820"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.692766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" event={"ID":"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7","Type":"ContainerStarted","Data":"8afae448fd06804663a482d0a781ad7f23f4ad9fbf2f57bda116e75b1bea36a1"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.701878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"76fde1e7356564005a3d5c2e44cfd4e65aa26bf34cad6b298cc295a256ca252e"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.704019 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" event={"ID":"217f16d7-943b-4603-88fa-155377da9788","Type":"ContainerStarted","Data":"8b6da3bdc6d1eba4c81b206e8eb959228ded2c4354635ea2f17c3404bd13a2e5"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.705466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.706460 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.20644568 +0000 UTC m=+148.879299917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.719880 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" event={"ID":"5b987d67-e424-4286-a25d-11bfc4d1e577","Type":"ContainerStarted","Data":"60eb230a415a5a7dbb7ada59496bbe501d736b44fa85bf2c654d2168bfa57b98"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.720671 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zs2tk"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.735110 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" podStartSLOduration=126.73508956 podStartE2EDuration="2m6.73508956s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.732641116 +0000 UTC m=+148.405495363" watchObservedRunningTime="2026-01-29 15:30:04.73508956 +0000 UTC m=+148.407943807"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.743921 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.743979 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.753991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"6a05b6684e6b55217056921f9d150f3a111d0469bdd16f7be671021d94fbb59f"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.763036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" event={"ID":"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d","Type":"ContainerStarted","Data":"1469fd1bbc8563be92335c838cbab649b37256c9755798768624d28bc156469e"}
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.772474 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" podStartSLOduration=4.761387288 podStartE2EDuration="4.761387288s" podCreationTimestamp="2026-01-29 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.755566776 +0000 UTC m=+148.428421013" watchObservedRunningTime="2026-01-29 15:30:04.761387288 +0000 UTC m=+148.434241555"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.774950 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775022 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775301 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775419 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.777602 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" podStartSLOduration=125.777587183 podStartE2EDuration="2m5.777587183s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.774842381 +0000 UTC m=+148.447696608" watchObservedRunningTime="2026-01-29 15:30:04.777587183 +0000 UTC m=+148.450441420"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.797586 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" podStartSLOduration=125.797565716 podStartE2EDuration="2m5.797565716s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.794238629 +0000 UTC m=+148.467092866" watchObservedRunningTime="2026-01-29 15:30:04.797565716 +0000 UTC m=+148.470419953"
Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.825389 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.826272 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-29 15:30:05.326252627 +0000 UTC m=+148.999106944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.845878 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podStartSLOduration=126.845862231 podStartE2EDuration="2m6.845862231s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.827325565 +0000 UTC m=+148.500179823" watchObservedRunningTime="2026-01-29 15:30:04.845862231 +0000 UTC m=+148.518716468" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.870382 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" podStartSLOduration=125.870358732 podStartE2EDuration="2m5.870358732s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.847739789 +0000 UTC m=+148.520594026" watchObservedRunningTime="2026-01-29 15:30:04.870358732 +0000 UTC m=+148.543212979" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.901260 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" podStartSLOduration=125.90123522 podStartE2EDuration="2m5.90123522s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.873278148 +0000 UTC m=+148.546132385" watchObservedRunningTime="2026-01-29 15:30:04.90123522 +0000 UTC m=+148.574089467" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.929730 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.930943 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.430908158 +0000 UTC m=+149.103762395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.954442 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podStartSLOduration=125.954418082 podStartE2EDuration="2m5.954418082s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.936327018 +0000 UTC m=+148.609181255" watchObservedRunningTime="2026-01-29 15:30:04.954418082 +0000 UTC m=+148.627272319" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.983995 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" podStartSLOduration=126.983977936 podStartE2EDuration="2m6.983977936s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.957773381 +0000 UTC m=+148.630627618" watchObservedRunningTime="2026-01-29 15:30:04.983977936 +0000 UTC m=+148.656832173" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.984204 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" podStartSLOduration=126.984199293 podStartE2EDuration="2m6.984199293s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.982648382 +0000 UTC m=+148.655502629" watchObservedRunningTime="2026-01-29 15:30:04.984199293 +0000 UTC m=+148.657053540" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.006477 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" podStartSLOduration=127.006457476 podStartE2EDuration="2m7.006457476s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.004025612 +0000 UTC m=+148.676879869" watchObservedRunningTime="2026-01-29 15:30:05.006457476 +0000 UTC m=+148.679311723" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.032121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.032468 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.532457386 +0000 UTC m=+149.205311613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.051463 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g2rk6" podStartSLOduration=127.051449963 podStartE2EDuration="2m7.051449963s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.051212096 +0000 UTC m=+148.724066353" watchObservedRunningTime="2026-01-29 15:30:05.051449963 +0000 UTC m=+148.724304200" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.079940 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" podStartSLOduration=126.079922338 podStartE2EDuration="2m6.079922338s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.078393539 +0000 UTC m=+148.751247776" watchObservedRunningTime="2026-01-29 15:30:05.079922338 +0000 UTC m=+148.752776585" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.133165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.133588 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.633571783 +0000 UTC m=+149.306426020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.141842 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" podStartSLOduration=127.141825819 podStartE2EDuration="2m7.141825819s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.103061095 +0000 UTC m=+148.775915332" watchObservedRunningTime="2026-01-29 15:30:05.141825819 +0000 UTC m=+148.814680056" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.142125 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" podStartSLOduration=127.142120617 podStartE2EDuration="2m7.142120617s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.135928325 +0000 UTC m=+148.808782552" watchObservedRunningTime="2026-01-29 15:30:05.142120617 +0000 UTC m=+148.814974854" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.161900 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" podStartSLOduration=127.161881915 podStartE2EDuration="2m7.161881915s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.1598048 +0000 UTC m=+148.832659047" watchObservedRunningTime="2026-01-29 15:30:05.161881915 +0000 UTC m=+148.834736152" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.234569 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.234951 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.734934427 +0000 UTC m=+149.407788664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.335422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.335575 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.835543072 +0000 UTC m=+149.508397309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.335662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.335945 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.835935472 +0000 UTC m=+149.508789709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.437101 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.437382 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.937332986 +0000 UTC m=+149.610187263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.437514 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.437847 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.93783432 +0000 UTC m=+149.610688557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.441236 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:05 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:05 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:05 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.441277 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.538259 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.538432 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.038410992 +0000 UTC m=+149.711265229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.538606 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.538926 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.038916786 +0000 UTC m=+149.711771023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.639321 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.639474 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.139440818 +0000 UTC m=+149.812295095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.639569 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.639910 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.13989671 +0000 UTC m=+149.812750947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.740246 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.740447 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.240402081 +0000 UTC m=+149.913256358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.740525 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.741033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.241003657 +0000 UTC m=+149.913857944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.768483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"8a3fd0c545cecffef0eefee384beab5dfdc354a553d84845f204cb0ed3f9d3f5"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.770384 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"1e260b26f54be54fccd58ded45998736fb21255fd1fbad49025c25da64de58b4"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.772249 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerStarted","Data":"ada8cd4946b6f3f363e70e26539d7ad5c75f7dff04f50ead2bda78d440c0a541"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.786843 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.786914 5008 generic.go:334] "Generic (PLEG): container finished" podID="b1a4a04b-067c-43f1-b355-46161babe869" containerID="3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" exitCode=2 Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.787140 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerDied","Data":"3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788740 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788763 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788822 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788860 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788882 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788880 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788790 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788827 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.789144 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.789563 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.791560 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.806043 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" podStartSLOduration=126.806027739 podStartE2EDuration="2m6.806027739s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.805478584 +0000 UTC m=+149.478332821" watchObservedRunningTime="2026-01-29 15:30:05.806027739 +0000 UTC m=+149.478881976" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.838208 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" podStartSLOduration=126.838185541 podStartE2EDuration="2m6.838185541s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.837038061 +0000 UTC m=+149.509892318" watchObservedRunningTime="2026-01-29 15:30:05.838185541 +0000 UTC m=+149.511039778" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.844958 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.847033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.347012272 +0000 UTC m=+150.019866509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.898395 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" podStartSLOduration=126.898378257 podStartE2EDuration="2m6.898378257s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.876108564 +0000 UTC m=+149.548962811" watchObservedRunningTime="2026-01-29 15:30:05.898378257 +0000 UTC m=+149.571232504" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.899215 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" podStartSLOduration=126.899208099 podStartE2EDuration="2m6.899208099s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.8977152 +0000 UTC m=+149.570569467" watchObservedRunningTime="2026-01-29 15:30:05.899208099 +0000 UTC m=+149.572062346" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.947629 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.948097 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.448081988 +0000 UTC m=+150.120936225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.049143 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.049357 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.549333449 +0000 UTC m=+150.222187686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.049520 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.049940 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.549930785 +0000 UTC m=+150.222785022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.151038 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.151206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.651183236 +0000 UTC m=+150.324037553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.151300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.151659 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.651649588 +0000 UTC m=+150.324503825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.252334 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.252518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.752471217 +0000 UTC m=+150.425325454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.252940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.253358 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.75334796 +0000 UTC m=+150.426202197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.353929 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.354331 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.854290413 +0000 UTC m=+150.527144670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.354523 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.371846 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.442017 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:06 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:06 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:06 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.442134 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456363 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456420 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456533 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.457031 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.957009233 +0000 UTC m=+150.629863660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.462911 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.462927 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.557518 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.557761 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.057719669 +0000 UTC m=+150.730573916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.558129 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.558491 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.058479829 +0000 UTC m=+150.731334266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.656888 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.659141 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.159112374 +0000 UTC m=+150.831966611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.659690 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.660186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.660600 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.160590023 +0000 UTC m=+150.833444360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.666610 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.762493 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.763388 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.263362783 +0000 UTC m=+150.936217020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797613 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797638 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797673 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797696 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.865126 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.865762 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:07.365748024 +0000 UTC m=+151.038602261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.872535 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" podStartSLOduration=127.872513681 podStartE2EDuration="2m7.872513681s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.922076647 +0000 UTC m=+149.594930904" watchObservedRunningTime="2026-01-29 15:30:06.872513681 +0000 UTC m=+150.545367918" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.966006 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.966189 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.466164483 +0000 UTC m=+151.139018740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.966258 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.966633 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.466621746 +0000 UTC m=+151.139475983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.066967 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.067306 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.5672877 +0000 UTC m=+151.240141937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.168983 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.169411 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.669393214 +0000 UTC m=+151.342247471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.270513 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.270874 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.770853531 +0000 UTC m=+151.443707778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.371958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.372455 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.872430839 +0000 UTC m=+151.545285076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.441364 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:07 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:07 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:07 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.441442 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.472515 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.472713 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.972676614 +0000 UTC m=+151.645530851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.472903 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.473193 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.973180997 +0000 UTC m=+151.646035244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.573519 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.573537 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.073513814 +0000 UTC m=+151.746368051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.574075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.574437 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.074429359 +0000 UTC m=+151.747283596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.675039 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.675403 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.175385952 +0000 UTC m=+151.848240189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.777750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.780577 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.280549145 +0000 UTC m=+151.953403392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.878840 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.879283 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.379262779 +0000 UTC m=+152.052117016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.980548 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.980946 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.480931711 +0000 UTC m=+152.153785948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.081646 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.081910 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.581882504 +0000 UTC m=+152.254736741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.081982 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.082316 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.582298645 +0000 UTC m=+152.255152882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.182908 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.183113 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.683079603 +0000 UTC m=+152.355933850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.183343 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.183949 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.683919586 +0000 UTC m=+152.356773833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.284307 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.284757 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.784731565 +0000 UTC m=+152.457585832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.386501 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.387245 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.887208488 +0000 UTC m=+152.560062775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.441047 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:08 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:08 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:08 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.441115 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.487921 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.488349 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.988325335 +0000 UTC m=+152.661179592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.590227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.590653 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.090636324 +0000 UTC m=+152.763490581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.691986 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.692589 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.192563173 +0000 UTC m=+152.865417450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.794042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.794592 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.294572924 +0000 UTC m=+152.967427191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.895489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.895975 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.395957237 +0000 UTC m=+153.068811474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.997236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.997724 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.497709172 +0000 UTC m=+153.170563409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.098692 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.099017 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.598967753 +0000 UTC m=+153.271822000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.099228 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.099576 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.599557729 +0000 UTC m=+153.272411966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.125101 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.125170 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.128974 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.129048 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.200459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.201006 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.700985374 +0000 UTC m=+153.373839611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.302447 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.302879 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.802857471 +0000 UTC m=+153.475711708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.403669 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.404172 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.904143883 +0000 UTC m=+153.576998150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.441386 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:09 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:09 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:09 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.441463 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.506037 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.506734 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.006707289 +0000 UTC m=+153.679561566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.607836 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.608161 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.108141674 +0000 UTC m=+153.780995911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.655605 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.655669 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.709735 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.710169 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.210153655 +0000 UTC m=+153.883007902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.798859 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.800012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.802106 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.802155 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.810444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.810813 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.310760949 +0000 UTC m=+153.983615226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.810924 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.811387 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:10.311374325 +0000 UTC m=+153.984228562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.816147 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.816202 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.911880 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.912080 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.412053011 +0000 UTC m=+154.084907248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.912236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.912678 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.412658357 +0000 UTC m=+154.085512594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.921257 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body= Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.921344 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.013562 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.013811 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.513753954 +0000 UTC m=+154.186608191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.014006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.014477 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.514463442 +0000 UTC m=+154.187317689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.115482 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.115825 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.615758874 +0000 UTC m=+154.288613151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.115930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.116386 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.6163678 +0000 UTC m=+154.289222037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.217352 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.217690 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.717643311 +0000 UTC m=+154.390497538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.217899 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.218282 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.718250348 +0000 UTC m=+154.391104585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.319530 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.319850 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.819760605 +0000 UTC m=+154.492614842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.319951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.320428 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.820418753 +0000 UTC m=+154.493272990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.377931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378412 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378481 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378427 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378531 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378564 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378644 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.379245 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.379556 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" 
output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380303 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380340 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380349 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380395 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380825 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380850 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.424630 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.424992 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.924935218 +0000 UTC m=+154.597789455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.425201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.427901 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.927878596 +0000 UTC m=+154.600732863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.438233 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.441234 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:10 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:10 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:10 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.441292 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.526157 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.526335 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.026312223 +0000 UTC m=+154.699166470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.526813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.527154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.027142884 +0000 UTC m=+154.699997121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.628464 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.629201 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.129178576 +0000 UTC m=+154.802032823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.732259 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.232233424 +0000 UTC m=+154.905087741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.731778 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.834177 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.834355 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.334319127 +0000 UTC m=+155.007173374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.834962 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.835408 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.335396676 +0000 UTC m=+155.008251013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.861815 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.906992 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907079 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907163 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907200 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.920115 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.921154 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.924469 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.936365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.937839 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.938182 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.438162786 +0000 UTC m=+155.111017033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.946732 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958879 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958953 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958996 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.959074 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.032152 5008 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.033012 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.034578 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039323 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039435 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.039827 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.539812848 +0000 UTC m=+155.212667085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.044018 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.048825 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.140940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.141395 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.641160721 +0000 UTC m=+155.314014958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141603 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141553 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141843 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.142178 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.642155627 +0000 UTC m=+155.315009864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.148525 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.175057 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.244740 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.744692421 +0000 UTC m=+155.417546658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244860 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.245348 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.245588 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.745560744 +0000 UTC m=+155.418414981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.271309 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.271468 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.348091 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.348974 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.350023 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.849984418 +0000 UTC m=+155.522838675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.454930 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:11 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:11 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:11 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.454997 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.456047 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.462206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.962188115 +0000 UTC m=+155.635042352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.560950 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.561276 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.061253159 +0000 UTC m=+155.734107406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.561559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.561958 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.061947678 +0000 UTC m=+155.734801915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: W0129 15:30:11.592630 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed WatchSource:0}: Error finding container 9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed: Status 404 returned error can't find the container with id 9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.662968 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.663238 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.163183528 +0000 UTC m=+155.836037785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.663346 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.663701 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.163684801 +0000 UTC m=+155.836539238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.765203 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.765355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.765616 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.265580489 +0000 UTC m=+155.938434726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: W0129 15:30:11.778458 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5864482d_142b_4ab3_a5e1_d48e89d3dde0.slice/crio-aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc WatchSource:0}: Error finding container aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc: Status 404 returned error can't find the container with id aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.816489 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.820940 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerStarted","Data":"ae0ff5f28e7a513d7ddd669bd2c1d28678a491dedac61848acb0aa0f9238ab51"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.821856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6e2e89d4dd1bed8000cf6c6ddd761ad75f85e4c768b0da1e57589771bdb83f8e"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.823654 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"ce7657427cf40ffcbee6a3dd4452793f8588fce59d300c77555b154e47a25d54"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.824483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerStarted","Data":"aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.825281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.826083 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0595a070517962b338e483f65eb3819bc102b989f320901899588945e4149f1a"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.826201 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.840771 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tw5d5" podStartSLOduration=14.840751767 podStartE2EDuration="14.840751767s" 
podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:11.839125415 +0000 UTC m=+155.511979662" watchObservedRunningTime="2026-01-29 15:30:11.840751767 +0000 UTC m=+155.513606004" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.866718 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.867314 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.367296792 +0000 UTC m=+156.040151029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.968356 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.970137 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.470110544 +0000 UTC m=+156.142964781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.070383 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.070841 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:12.57081931 +0000 UTC m=+156.243673547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.171949 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.172313 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.672273306 +0000 UTC m=+156.345127563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.172802 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.173436 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.673422016 +0000 UTC m=+156.346276243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.273755 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.274248 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.774226996 +0000 UTC m=+156.447081243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.375479 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.375806 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.875794625 +0000 UTC m=+156.548648862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.442038 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:12 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:12 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:12 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.442102 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.476536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.476725 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.976699077 +0000 UTC m=+156.649553304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.476837 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.477152 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.977144868 +0000 UTC m=+156.649999105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.505760 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.505834 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578181 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578207 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.579193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.579367 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.079328444 +0000 UTC m=+156.752182701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.591371 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8" (OuterVolumeSpecName: "kube-api-access-tsqb8") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "kube-api-access-tsqb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.591423 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679677 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679694 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679706 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.680082 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.180063721 +0000 UTC m=+156.852918028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.780671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.780844 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.280813319 +0000 UTC m=+156.953667566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.780941 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.781319 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.281307742 +0000 UTC m=+156.954161979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832116 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832506 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerDied","Data":"1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20"} Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832544 5008 scope.go:117] "RemoveContainer" containerID="3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832557 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.834429 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerStarted","Data":"a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6"} Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.854815 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podStartSLOduration=134.854797276 podStartE2EDuration="2m14.854797276s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:12.854484698 +0000 UTC m=+156.527338955" watchObservedRunningTime="2026-01-29 15:30:12.854797276 +0000 UTC m=+156.527651513" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.862114 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.867007 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.882595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.882764 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:13.382744038 +0000 UTC m=+157.055598275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.882877 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.883146 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.383138188 +0000 UTC m=+157.055992425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.984405 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.984634 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.484601235 +0000 UTC m=+157.157455472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.984827 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.985172 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.485155659 +0000 UTC m=+157.158009896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.005525 5008 patch_prober.go:28] interesting pod/dns-default-tw5d5 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.44:8181/ready\": dial tcp 10.217.0.44:8181: connect: connection refused" start-of-body= Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.005649 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-tw5d5" podUID="a161323e-d13e-46da-b8bd-347b56ef5110" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.44:8181/ready\": dial tcp 10.217.0.44:8181: connect: connection refused" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.085964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.086202 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.586167053 +0000 UTC m=+157.259021310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.086267 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.086678 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.586668607 +0000 UTC m=+157.259522914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.187657 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.187844 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.687813725 +0000 UTC m=+157.360667972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.187952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.188265 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.688255376 +0000 UTC m=+157.361109683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.288884 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.289037 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.789013974 +0000 UTC m=+157.461868211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.289213 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.289570 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.789558718 +0000 UTC m=+157.462412955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.338202 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a4a04b-067c-43f1-b355-46161babe869" path="/var/lib/kubelet/pods/b1a4a04b-067c-43f1-b355-46161babe869/volumes" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.390245 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.390455 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.890428829 +0000 UTC m=+157.563283066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.390516 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.390922 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.890905692 +0000 UTC m=+157.563759929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.450766 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:13 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:13 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:13 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.450854 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.491817 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.492004 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.991976158 +0000 UTC m=+157.664830405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.492185 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.492510 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.992494652 +0000 UTC m=+157.665348889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.593353 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.593518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.093488736 +0000 UTC m=+157.766342973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.593698 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.594033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.09402506 +0000 UTC m=+157.766879297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.694881 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.695013 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.194985643 +0000 UTC m=+157.867839880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.695232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.695555 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.195548018 +0000 UTC m=+157.868402255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.797025 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.797256 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.29722366 +0000 UTC m=+157.970077907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.797745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.798207 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.298189936 +0000 UTC m=+157.971044173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.842378 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerStarted","Data":"e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.847552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"28d436bfdd643f9abcc9f49d58a2cbaeb6a404fe87976cc84ba7055feb5b14d2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.849547 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"70ff31cba9dd56eb2b8af86640fc062e012a297d6820348d5f14dba195688194"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.851818 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerStarted","Data":"bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.854233 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90d7c62feea83e4c216393c180437aec60cdde116ef4613c896fbf55aa635e4d"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.893250 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" podStartSLOduration=134.893228524 podStartE2EDuration="2m14.893228524s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:13.890181163 +0000 UTC m=+157.563035400" watchObservedRunningTime="2026-01-29 15:30:13.893228524 +0000 UTC m=+157.566082761" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.899532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.899692 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.399667432 +0000 UTC m=+158.072521679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.899770 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.900084 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.400072332 +0000 UTC m=+158.072926579 (durationBeforeRetry 500ms). 
Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.900084 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.400072332 +0000 UTC m=+158.072926579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.990864 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.990939 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.002432 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.003706 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.503684525 +0000 UTC m=+158.176538772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.104151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
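The machine-config-daemon liveness failure above is an ordinary HTTP GET that found nothing listening on 127.0.0.1:8798. Replaying the probe by hand from the node separates a daemon problem from a kubelet problem; a sketch using the exact endpoint from the log:

    // probecheck.go: replay the machine-config-daemon health probe.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // Mirrors the kubelet's "connect: connection refused" output.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }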
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.104580 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.604560627 +0000 UTC m=+158.277414934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.205404 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.205608 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.705571932 +0000 UTC m=+158.378426169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.205754 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.206070 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.706062094 +0000 UTC m=+158.378916331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.307012 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.307212 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.807175211 +0000 UTC m=+158.480029448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.307308 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.307639 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.807630964 +0000 UTC m=+158.480485201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.408427 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.408623 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.908596307 +0000 UTC m=+158.581450544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.408755 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.409133 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.90911873 +0000 UTC m=+158.581972967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.445582 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:14 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:14 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:14 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.445647 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.510479 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.510626 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.010599578 +0000 UTC m=+158.683453815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.510764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.511139 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.011128301 +0000 UTC m=+158.683982538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.612056 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.612532 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.112495745 +0000 UTC m=+158.785349992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.612584 5008 csr.go:261] certificate signing request csr-5kkjf is approved, waiting to be issued
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.623980 5008 csr.go:257] certificate signing request csr-5kkjf is issued
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.716144 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.716536 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.216523089 +0000 UTC m=+158.889377326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.734304 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.734370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.736684 5008 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-n2sqt container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.736737 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" podUID="4adf65cb-4f11-4061-bcb5-71c3d9b890f7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.817591 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.817739 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.317715768 +0000 UTC m=+158.990570015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.818195 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.818569 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.31855859 +0000 UTC m=+158.991412827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.878138 5008 generic.go:334] "Generic (PLEG): container finished" podID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerID="bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2" exitCode=0
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.878259 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerDied","Data":"bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2"}
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.880218 5008 generic.go:334] "Generic (PLEG): container finished" podID="4a912999-007c-495d-aaa3-857d76158a91" containerID="74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b" exitCode=0
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.880311 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerDied","Data":"74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b"}
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.883016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"c0afd54cc1c889ad21a3bff4c006b825538ef035b544e57d61f4e726cb2a6c30"}
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.883488 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.919185 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.919416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.419368539 +0000 UTC m=+159.092222776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.919493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.919829 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.419814262 +0000 UTC m=+159.092668499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.021471 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.022649 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.522631293 +0000 UTC m=+159.195485530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.123758 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.124416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.624384657 +0000 UTC m=+159.297238994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.225028 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
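Note the cadence in the entries above: the reconciler re-attempts roughly every 100ms in this log, but nestedpendingoperations gates each volume on a "No retries permitted until" deadline that advances in 500ms steps, so most passes are rejected before ever reaching a driver. A simplified sketch of that gating pattern (illustrative only, not kubelet source; all names are made up):

    // requeuegate.go: fixed-delay retry gate in the style of the log above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    const durationBeforeRetry = 500 * time.Millisecond

    type gate struct{ notBefore time.Time }

    func (g *gate) attempt(op func() error) {
        now := time.Now()
        if now.Before(g.notBefore) {
            return // still inside the backoff window; skip this pass
        }
        if err := op(); err != nil {
            g.notBefore = now.Add(durationBeforeRetry)
            fmt.Printf("failed. No retries permitted until %s: %v\n",
                g.notBefore.Format(time.RFC3339Nano), err)
        }
    }

    func main() {
        g := &gate{}
        mountDevice := func() error { return errors.New("driver not registered") }
        for i := 0; i < 10; i++ {
            g.attempt(mountDevice)
            time.Sleep(100 * time.Millisecond) // reconciler pass interval (assumed)
        }
    }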
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.225309 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.725263429 +0000 UTC m=+159.398117666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.225984 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.226591 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.726568853 +0000 UTC m=+159.399423090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.327334 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.327668 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.827610298 +0000 UTC m=+159.500464535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.429334 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.429966 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.929943717 +0000 UTC m=+159.602798144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.443318 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:15 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:15 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:15 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.443413 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.530690 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.530867 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.030826348 +0000 UTC m=+159.703680585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.530983 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.531434 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.031399954 +0000 UTC m=+159.704254191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.625585 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 15:25:14 +0000 UTC, rotation deadline is 2026-12-23 12:16:42.298277108 +0000 UTC
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.626092 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7868h46m26.672191546s for next certificate rotation
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.632610 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
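The certificate_manager lines above schedule the next kubelet-serving rotation for 2026-12-23, just under 90% of the way through the one-year certificate; client-go's certificate manager rotates at a jittered point late in the validity window rather than at expiry. The wait matches the arithmetic: Jan 29 15:30 to Dec 23 12:16 is 328 days minus about 3h14m, i.e. 7868h46m. A sketch for reading the live certificate's window off the node (the PEM path is the conventional kubelet location, an assumption here):

    // certwindow.go: print the kubelet serving certificate's validity window.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("NotBefore:", cert.NotBefore) // the rotation deadline falls
        fmt.Println("NotAfter: ", cert.NotAfter)  // between these two, with jitter
    }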
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.632872 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.132835419 +0000 UTC m=+159.805689666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.632990 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.633374 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.133361903 +0000 UTC m=+159.806216330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.636129 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638236 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638295 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638203 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638441 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638899 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638973 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.734566 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.734773 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.234750038 +0000 UTC m=+159.907604275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.735017 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.735388 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.235375994 +0000 UTC m=+159.908230231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.836506 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.836733 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.336700257 +0000 UTC m=+160.009554504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.836867 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.837236 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.33722466 +0000 UTC m=+160.010078987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.891081 5008 generic.go:334] "Generic (PLEG): container finished" podID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerID="e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2" exitCode=0
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.891167 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerDied","Data":"e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2"}
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.893001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"3ef06d541e3be44327ec0ce8f76deb7bf993de18e835a436e7d79c91a5c19e31"}
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.940976 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.941154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.441128601 +0000 UTC m=+160.113982838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.941243 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
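The ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-g9x2n above is the turning point: once that plugin registers over the kubelet's plugin-registration socket, the "driver name ... not found" loop should resolve on a subsequent 500ms retry. A small poll that waits for a registration socket to appear (the directory and socket name are the same assumptions as in the earlier sketch):

    // waitdriver.go: poll until a CSI registration socket shows up.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func main() {
        sock := filepath.Join("/var/lib/kubelet/plugins_registry",
            "kubevirt.io.hostpath-provisioner-reg.sock") // illustrative name
        for {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("registration socket present:", sock)
                return
            }
            time.Sleep(500 * time.Millisecond) // match the kubelet's retry cadence
        }
    }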
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.941545 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.441530141 +0000 UTC m=+160.114384378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.976611 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" podStartSLOduration=137.97658789 podStartE2EDuration="2m17.97658789s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:15.973250442 +0000 UTC m=+159.646104709" watchObservedRunningTime="2026-01-29 15:30:15.97658789 +0000 UTC m=+159.649442137"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.006341 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tw5d5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.045743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.047553 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.547523466 +0000 UTC m=+160.220377883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.147669 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.148390 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.648370316 +0000 UTC m=+160.321224563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.248288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.249015 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.748995831 +0000 UTC m=+160.421850068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301073 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"]
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.301347 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301370 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301487 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.303184 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.306322 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.355817 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.361564 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.861529577 +0000 UTC m=+160.534383814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.378751 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.417606 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434321 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4dwdf"]
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.434557 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434570 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.434581 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434588 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434689 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434707 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.435436 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.438201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"]
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.438837 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.447608 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:16 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:16 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:16 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.447680 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.456262 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"]
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462542 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462877 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462971 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.463151 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.963121427 +0000 UTC m=+160.635975664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564135 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"5b4af13c-49f7-4c06-840c-6e976b55fabd\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564196 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b4af13c-49f7-4c06-840c-6e976b55fabd" (UID: "5b4af13c-49f7-4c06-840c-6e976b55fabd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564222 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564249 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564305 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564341 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"5b4af13c-49f7-4c06-840c-6e976b55fabd\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564475 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564502 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc
kubenswrapper[5008]: I0129 15:30:16.564536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564590 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564629 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564705 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565062 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565247 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565542 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.565899 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.065885167 +0000 UTC m=+160.738739404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.573396 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b4af13c-49f7-4c06-840c-6e976b55fabd" (UID: "5b4af13c-49f7-4c06-840c-6e976b55fabd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.579050 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq" (OuterVolumeSpecName: "kube-api-access-nhkzq") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "kube-api-access-nhkzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.593128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.603194 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.627187 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.629103 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.660030 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.667546 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.667762 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.167720544 +0000 UTC m=+160.840574791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.667970 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668002 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668104 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668144 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668156 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668166 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668175 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.668557 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.168543756 +0000 UTC m=+160.841397993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668766 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.718800 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.720864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.769608 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.769946 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:17.269917769 +0000 UTC m=+160.942772006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770246 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770461 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.770998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.270972367 +0000 UTC m=+160.943826604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.828055 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.829216 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.858743 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.888952 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889304 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889673 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.890264 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.890361 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.390341242 +0000 UTC m=+161.063195479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.891302 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.949662 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.955205 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.957660 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.958945 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerDied","Data":"a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6"} Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.958992 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979281 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerDied","Data":"e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb"} Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979835 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001551 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001765 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.002353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.003035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.003189 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.503166397 +0000 UTC m=+161.176020634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.042987 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.107117 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.107533 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.607512738 +0000 UTC m=+161.280366975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.149548 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.198036 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.212560 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.212908 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.712896688 +0000 UTC m=+161.385750925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.314444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.314650 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.81461142 +0000 UTC m=+161.487465657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.314800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.315124 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.815112504 +0000 UTC m=+161.487966741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.339193 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.415888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.416191 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.91617544 +0000 UTC m=+161.589029677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.441729 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:17 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:17 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:17 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.441775 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.519455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.519962 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.019945117 +0000 UTC m=+161.692799354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.573950 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.578718 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622318 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622450 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622477 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.623394 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5864482d-142b-4ab3-a5e1-d48e89d3dde0" (UID: "5864482d-142b-4ab3-a5e1-d48e89d3dde0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.623447 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.123407445 +0000 UTC m=+161.796261682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.653909 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.723621 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.724021 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.224002059 +0000 UTC m=+161.896856296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.724245 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.825318 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.825513 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.325483516 +0000 UTC m=+161.998337753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.825604 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.825899 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.325887616 +0000 UTC m=+161.998741853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.926232 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.926520 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.426505251 +0000 UTC m=+162.099359488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.985322 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"dd8d6696ceba57808730ee9b74baad13f0f3efae19998fb92ff0c2c357522c56"}
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.986307 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerStarted","Data":"54d6cf905ba0c9c55baea0b1bbde4338656f4661c2571ae702fdc0067f3ef4cb"}
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerDied","Data":"aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc"}
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987505 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc"
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987554 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.988680 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"af3e1a3fc6fe6b714e3700dd86c4612e0716f599f6f3f8cae393165561ce5bfe"}
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.027055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.027354 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.527343641 +0000 UTC m=+162.200197878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.128589 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.128732 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.628712145 +0000 UTC m=+162.301566382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.129232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.129567 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.629558077 +0000 UTC m=+162.302412314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.168352 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5864482d-142b-4ab3-a5e1-d48e89d3dde0" (UID: "5864482d-142b-4ab3-a5e1-d48e89d3dde0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.234872 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.734848093 +0000 UTC m=+162.407702340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.234922 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.235154 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.235617 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.735604283 +0000 UTC m=+162.408458520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.235969 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.336617 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.337057 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.837017068 +0000 UTC m=+162.509871485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.337812 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.338302 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.838278451 +0000 UTC m=+162.511132688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.426609 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"]
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.426933 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.426948 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.427062 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.430603 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.433435 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.433650 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"]
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.438481 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.438618 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.938597658 +0000 UTC m=+162.611451905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.438736 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.439041 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.939033429 +0000 UTC m=+162.611887666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.440421 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:18 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:18 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:18 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.440465 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.539986 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.540193 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.040155076 +0000 UTC m=+162.713009313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540442 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540556 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.540885 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.040826204 +0000 UTC m=+162.713680441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.641878 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.642079 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.142047924 +0000 UTC m=+162.814902171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642113 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642177 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642327 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642623 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642713 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.642882 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.142850576 +0000 UTC m=+162.815704813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.650523 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.673245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.743298 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.743489 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.24345663 +0000 UTC m=+162.916310867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.743550 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.743949 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.243937312 +0000 UTC m=+162.916791549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.817627 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"]
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.818753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.820455 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.829333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"]
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.859734 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.859925 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.359898398 +0000 UTC m=+163.032752675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860182 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860265 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860411 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860529 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.861170 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.361148521 +0000 UTC m=+163.034002798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.961849 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.962029 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.461997001 +0000 UTC m=+163.134851238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.962095 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.962441 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.462428312 +0000 UTC m=+163.135282539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963396 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963510 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.964527 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.964752 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.991927 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.998168 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" exitCode=0
Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.998236 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.004670 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"0b67f4499cb9c5f59f98a3ab23560adef52655f324dbef45543827963ba1b7c8"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007365 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" exitCode=0
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007446 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerStarted","Data":"616df5323044bc3ebd3a98d75f3ea061e944f69d5bc62803ba635bd69dee1996"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.011583 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c" exitCode=0
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.011635 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.014357 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.018643 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" exitCode=0
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.018698 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b"}
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.064995 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.066529 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.566483736 +0000 UTC m=+163.239337973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.070283 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.129915 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.129980 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.130370 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.130435 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.137753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.166461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.166840 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.666827384 +0000 UTC m=+163.339681611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.239718 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.239985 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8q2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4dwdf_openshift-marketplace(d2d42845-cca1-4b60-bc84-4b2baebf702b): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.241133 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.250461 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.250591 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dldqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cwgw5_openshift-marketplace(6aebe040-289b-48c1-a825-f12b471a5ad6): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.251962 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.268196 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.268700 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.768682251 +0000 UTC m=+163.441536488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.362576 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.369515 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.369928 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.869911861 +0000 UTC m=+163.542766098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: W0129 15:30:19.372089 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37742fc9_fce4_41f0_ba04_7232b6e647a7.slice/crio-335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d WatchSource:0}: Error finding container 335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d: Status 404 returned error can't find the container with id 335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.416656 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.417819 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.419623 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.431254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.441418 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:19 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:19 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:19 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.441482 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471510 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471546 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471613 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.471752 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.971734797 +0000 UTC m=+163.644589034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572957 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.573667 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.073653676 +0000 UTC m=+163.746507913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573989 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.598885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.659398 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.674773 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.674918 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.174887146 +0000 UTC m=+163.847741383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.675065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.675346 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.175332427 +0000 UTC m=+163.848186744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.741159 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.746516 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.769945 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.769986 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.775939 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.776154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.276114566 +0000 UTC m=+163.948968803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.776388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.777502 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.277487042 +0000 UTC m=+163.950341279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.798908 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.798985 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.803113 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.826945 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.831959 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.834568 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.849883 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"]
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.877281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.877967 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.878009 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.377982713 +0000 UTC m=+164.050836950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878167 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878203 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.880521 5008 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.380503949 +0000 UTC m=+164.053358246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.940149 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981298 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981571 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981793 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.982893 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.482872739 +0000 UTC m=+164.155726976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.983170 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.001016 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.009173 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.041259 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerStarted","Data":"335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d"} Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.043903 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerStarted","Data":"57f282b94968e79e724bd40448547c7c110b5b3c35e9677aea1eb21b270ed1d9"} Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.053435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.054296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" containerID="cri-o://4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e" gracePeriod=30 Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.057020 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.057474 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.057712 5008 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" containerID="cri-o://8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e" gracePeriod=30 Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.058884 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.086808 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.089271 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.589241074 +0000 UTC m=+164.262095311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.116857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.188403 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.189138 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.190148 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.688861122 +0000 UTC m=+164.361715359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.290322 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.290702 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.790688068 +0000 UTC m=+164.463542305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327596 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.327725 5008 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4l85w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]log ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]etcd ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/max-in-flight-filter ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 29 
15:30:20 crc kubenswrapper[5008]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 15:30:20 crc kubenswrapper[5008]: livez check failed Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.327814 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" podUID="653b37fe-d452-4111-b27f-ef75530abe41" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327869 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z9t2h_openshift-marketplace(250e7db8-88dd-44fd-8d73-51a6f8f4ba96): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327917 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.328118 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btkm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h7vmc_openshift-marketplace(9bcecb83-1aec-4bd4-9b46-f02deb628018): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.330297 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.331388 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.391152 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.391719 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.891699953 +0000 UTC m=+164.564554190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.391903 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.394616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.405587 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.459502 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:20 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.459563 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.493723 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.495029 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.995016118 +0000 UTC m=+164.667870355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.594616 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.594893 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.094859242 +0000 UTC m=+164.767713479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.595300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.596012 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.096000582 +0000 UTC m=+164.768854809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.643655 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.696884 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.696998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.196977926 +0000 UTC m=+164.869832163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.697167 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.697457 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.197449298 +0000 UTC m=+164.870303535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.799384 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.799845 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.299824898 +0000 UTC m=+164.972679145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.882567 5008 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.901478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.902000 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.401981022 +0000 UTC m=+165.074835259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.911143 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.954069 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003041 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.003298 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.503255225 +0000 UTC m=+165.176109462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003794 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.005225 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.505207755 +0000 UTC m=+165.178061992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.015119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.051594 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"c7bb2d8d5dfc5bd460b51cbe8abe72fb7d9bc5d3e8c022f6997fb845b267cc34"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.053527 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d" exitCode=0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.053627 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.055378 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" exitCode=0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.055451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.057602 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"3a220e753ea80972106fad12775f162fefbbbe237c9a00a237aa821badcac191"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.058654 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"add0ef656328b3411c8246a1cffa7e2baeefc91f711bf33d67c37a176e10eb38"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.072404 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.104811 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.105187 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.605166122 +0000 UTC m=+165.278020359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.206518 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.206887 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.706874825 +0000 UTC m=+165.379729062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.308171 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.308398 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.808369433 +0000 UTC m=+165.481223710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.308600 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.309033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.809014629 +0000 UTC m=+165.481868906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.410233 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.410425 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.910398973 +0000 UTC m=+165.583253210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.410597 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.410888 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.910876647 +0000 UTC m=+165.583730874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.441122 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:21 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:21 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:21 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.441185 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.457797 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"] Jan 29 15:30:21 crc kubenswrapper[5008]: W0129 15:30:21.464840 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3716fd8_7f9b_44e2_ac3c_e907d8793dc9.slice/crio-f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a WatchSource:0}: Error finding container f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a: Status 404 returned error can't find the container with id f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.512437 5008 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T15:30:20.882622245Z","Handler":null,"Name":""} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.512991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.513315 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:22.013297228 +0000 UTC m=+165.686151465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.520773 5008 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.520848 5008 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.614568 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.725164 5008 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.725453 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.919486 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.024026 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.032484 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.065299 5008 generic.go:334] "Generic (PLEG): container finished" podID="f56b5e44-f079-4c56-9e19-e09996979003" containerID="8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e" exitCode=0
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.065356 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerDied","Data":"8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e"}
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.066625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a"}
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.069002 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912" exitCode=0
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.069059 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912"}
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.070483 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" exitCode=0
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.070528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5"}
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.072108 5008 generic.go:334] "Generic (PLEG): container finished" podID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerID="4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e" exitCode=0
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.073078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerDied","Data":"4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e"}
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.112150 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233681 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233710 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233840 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw6k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fd6nq_openshift-marketplace(37742fc9-fce4-41f0-ba04-7232b6e647a7): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233887 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftbd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkxw5_openshift-marketplace(6aef1830-577d-405c-bb54-6f9fe217ae86): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.235228 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.235354 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86"
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.359002 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"]
Jan 29 15:30:22 crc kubenswrapper[5008]: W0129 15:30:22.363874 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c54800_b443_4da8_9d41_22e8f156a1a1.slice/crio-59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d WatchSource:0}: Error finding container 59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d: Status 404 returned error can't find the container with id 59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.443275 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http
failed: reason withheld Jan 29 15:30:22 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:22 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:22 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.443338 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.037717 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066085 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.066361 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066378 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066588 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.067093 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.076567 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.087107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerDied","Data":"283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1"} Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.087178 5008 scope.go:117] "RemoveContainer" containerID="8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.088120 5008 util.go:48] "No ready sandbox for pod can be found. 
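The ErrImagePull entries above all fail at the same early step: before fetching any blobs, the runtime must exchange the registry's auth challenge for a pull-scoped bearer token, and registry.redhat.io answers anonymous token requests for these index images with 403. A minimal stdlib sketch of just that token step follows; the challenge parsing is deliberately simplified, and the exact challenge format is an assumption.

// token_check.go — a sketch of the bearer-token step that fails in the
// ErrImagePull entries above. It pings /v2/ for a WWW-Authenticate challenge,
// then requests a pull-scoped token anonymously; against registry.redhat.io
// this is expected to come back 403 without valid credentials.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"regexp"
)

func main() {
	const registry = "https://registry.redhat.io"
	const repo = "redhat/redhat-operator-index"

	// Step 1: unauthenticated ping; the 401 names the token realm/service.
	resp, err := http.Get(registry + "/v2/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	challenge := resp.Header.Get("WWW-Authenticate")
	re := regexp.MustCompile(`realm="([^"]+)",service="([^"]+)"`) // simplified
	m := re.FindStringSubmatch(challenge)
	if m == nil {
		fmt.Println("no bearer challenge:", challenge)
		return
	}

	// Step 2: request a pull-scoped token, as the image library does before
	// fetching any manifest or blob.
	tokenURL := fmt.Sprintf("%s?service=%s&scope=%s", m[1],
		url.QueryEscape(m[2]),
		url.QueryEscape("repository:"+repo+":pull"))
	tok, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	tok.Body.Close()
	fmt.Println("token endpoint status:", tok.Status) // 403 Forbidden here
}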
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.090706 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"3973d52fa588768002d1f544c8d86d854d2542c7e734d160c088f88e6ba4e231"} Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.093308 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"03dc70e8eaf7dffe3a41b4db12c793f0ddba7b43611b8a4e8388ec0d7320f21b"} Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.094168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerStarted","Data":"59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d"} Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.124610 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140278 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140342 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140409 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140636 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140700 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " 
pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140811 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140849 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147372 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config" (OuterVolumeSpecName: "config") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147374 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca" (OuterVolumeSpecName: "client-ca") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.153064 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj" (OuterVolumeSpecName: "kube-api-access-4cdqj") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "kube-api-access-4cdqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.220612 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.220762 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-229kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tst9c_openshift-marketplace(ea8deba9-72cb-4274-add1-e80591a9e7cc): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.221998 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.222215 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.222303 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lhtht_openshift-marketplace(a954daed-802a-4b46-81ef-7079dcddbaa5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.223355 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242379 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242517 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242548 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " Jan 29 
15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242836 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242881 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242942 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242966 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243034 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243046 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243060 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243073 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243207 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "proxy-ca-bundles". 
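The interleaved UnmountVolume, VerifyControllerAttachedVolume, and MountVolume entries are kubelet's volume manager reconciling actual state against desired state while one pod is torn down and its replacement comes up: the dying pod's volumes leave the desired state and are unmounted, the new pod's volumes enter it and are mounted. The toy diff below illustrates the shape of that loop; names are abbreviated and the types are not kubelet's volumemanager API.

// volume_reconcile.go — a toy desired-vs-actual diff behind the
// Mount/UnmountVolume bursts above. Illustrative only.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for v := range actual {
		if !desired[v] { // pod deleted: volume must come down
			fmt.Println("UnmountVolume started for", v)
		}
	}
	for v := range desired {
		if !actual[v] { // new pod: volume must come up
			fmt.Println("MountVolume started for", v)
		}
	}
}

func main() {
	// Volumes of the dying route-controller-manager pod vs its replacement,
	// abbreviated from the entries above.
	actual := map[string]bool{
		"f56b5e44.../config": true, "f56b5e44.../client-ca": true,
		"f56b5e44.../serving-cert": true, "f56b5e44.../kube-api-access-4cdqj": true,
	}
	desired := map[string]bool{
		"afb7e8b5.../config": true, "afb7e8b5.../client-ca": true,
		"afb7e8b5.../serving-cert": true, "afb7e8b5.../kube-api-access-7dqd9": true,
	}
	reconcile(desired, actual)
}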
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243879 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.244037 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.244310 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config" (OuterVolumeSpecName: "config") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.245977 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.246375 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf" (OuterVolumeSpecName: "kube-api-access-dqdxf") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "kube-api-access-dqdxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.249136 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.257145 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.331327 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344213 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344248 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344260 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344271 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344283 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.423133 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.427728 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.431875 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.441585 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:23 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:23 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:23 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.441650 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.632588 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.107849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"a1e1e230de516adb80a0bc23e6ccd4421ec96f5e899ddf60854c3cf44cd677da"} Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.109438 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerStarted","Data":"30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5"} Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.109616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111136 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerDied","Data":"877a7a5331b5add1273bcb856b0a6b558e22fc4ee16ab1f101067f85b3c64f92"} Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111175 5008 scope.go:117] "RemoveContainer" containerID="4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111302 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116380 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerStarted","Data":"3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f"} Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerStarted","Data":"046a17c590098826d0a5eac7cd1935848d5c2b4be0940c2d3316db2e124ab690"} Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116437 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.125816 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kkc6c" podStartSLOduration=146.125777617 podStartE2EDuration="2m26.125777617s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.123172109 +0000 UTC m=+167.796026346" watchObservedRunningTime="2026-01-29 15:30:24.125777617 +0000 UTC m=+167.798631874" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.143491 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.148939 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.159034 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podStartSLOduration=146.159015597 podStartE2EDuration="2m26.159015597s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.158460132 +0000 UTC m=+167.831314379" watchObservedRunningTime="2026-01-29 15:30:24.159015597 +0000 UTC m=+167.831869844" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.179560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" podStartSLOduration=27.179540054 podStartE2EDuration="27.179540054s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.175631392 +0000 UTC m=+167.848485649" watchObservedRunningTime="2026-01-29 15:30:24.179540054 +0000 UTC m=+167.852394291" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.191845 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podStartSLOduration=4.191822706 podStartE2EDuration="4.191822706s" podCreationTimestamp="2026-01-29 15:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.191234811 +0000 UTC m=+167.864089068" watchObservedRunningTime="2026-01-29 15:30:24.191822706 +0000 UTC m=+167.864676963" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.220157 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.441132 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:24 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:24 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:24 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.441209 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.775205 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.780792 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.341491 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" path="/var/lib/kubelet/pods/7d5c80c8-4e74-4618-96c0-8e76168ad709/volumes" Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.342730 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56b5e44-f079-4c56-9e19-e09996979003" path="/var/lib/kubelet/pods/f56b5e44-f079-4c56-9e19-e09996979003/volumes" Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.441848 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:25 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:25 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:25 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.441945 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000444 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:26 crc kubenswrapper[5008]: E0129 15:30:26.000725 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000742 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" 
containerName="controller-manager" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000881 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.001995 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.005692 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.005708 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006261 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006258 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006420 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.007541 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.027407 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.038533 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080256 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080311 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080331 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080349 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: 
\"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080416 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182192 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182349 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183690 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183707 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183953 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.190510 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.203727 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.326496 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.451897 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:26 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:26 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:26 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.452311 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.733589 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139213 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerStarted","Data":"a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5"} Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139286 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerStarted","Data":"c89ec15a06edbf1f2377f00b216c0feab1fc55200bf490f013cd333af9148873"} Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139876 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.142029 5008 patch_prober.go:28] interesting pod/controller-manager-d7649699d-6xx6r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.142105 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.165053 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podStartSLOduration=7.165030899 podStartE2EDuration="7.165030899s" podCreationTimestamp="2026-01-29 15:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:27.160168422 +0000 UTC m=+170.833022669" watchObservedRunningTime="2026-01-29 15:30:27.165030899 +0000 UTC m=+170.837885126" Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.444682 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:27 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:27 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:27 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.444738 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.150733 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.440100 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:28 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:28 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:28 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.440198 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124650 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124720 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124724 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124824 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124915 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.125590 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.125627 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.126041 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"} pod="openshift-console/downloads-7954f5f757-6wmrp" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.126171 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" containerID="cri-o://b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c" gracePeriod=2 Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.442632 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:29 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:29 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:29 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.443179 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.800245 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.800336 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157883 5008 generic.go:334] "Generic (PLEG): container finished" podID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerID="b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c" exitCode=0 Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerDied","Data":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"} Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157984 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"04c64d72761a6b02c0284552d691c629d23e97b7073e08ff256271e0b02d6962"} Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158524 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158690 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158734 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.442104 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:30 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:30 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:30 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.442175 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.165693 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.165751 5008 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.440909 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:31 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:31 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:31 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.441184 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:32 crc kubenswrapper[5008]: I0129 15:30:32.442096 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:32 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:32 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:32 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:32 crc kubenswrapper[5008]: I0129 15:30:32.442200 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:33 crc kubenswrapper[5008]: I0129 15:30:33.440335 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:33 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:33 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:33 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:33 crc kubenswrapper[5008]: I0129 15:30:33.440400 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:34 crc kubenswrapper[5008]: I0129 15:30:34.446050 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:34 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:34 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:34 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:34 crc kubenswrapper[5008]: I0129 15:30:34.446126 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 29 15:30:35 crc kubenswrapper[5008]: I0129 15:30:35.457104 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:35 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:35 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:35 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:35 crc kubenswrapper[5008]: I0129 15:30:35.457498 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.441719 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:36 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:36 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:36 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.441932 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.844240 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.844558 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" containerID="cri-o://a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" gracePeriod=30 Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.866789 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.867004 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" containerID="cri-o://3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" gracePeriod=30 Jan 29 15:30:37 crc kubenswrapper[5008]: I0129 15:30:37.441692 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:37 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:37 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:37 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:37 crc kubenswrapper[5008]: I0129 15:30:37.441747 5008 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.231173 5008 generic.go:334] "Generic (PLEG): container finished" podID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerID="3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" exitCode=0 Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.231262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerDied","Data":"3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f"} Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.233927 5008 generic.go:334] "Generic (PLEG): container finished" podID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerID="a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" exitCode=0 Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.233976 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerDied","Data":"a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5"} Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.441792 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:38 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:38 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:38 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.441854 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.125315 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.125723 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.126070 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.126133 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.442333 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.446221 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.799655 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.799871 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:30:40 crc kubenswrapper[5008]: I0129 15:30:40.935548 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:42 crc kubenswrapper[5008]: I0129 15:30:42.119626 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:43 crc kubenswrapper[5008]: I0129 15:30:43.990405 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:30:43 crc kubenswrapper[5008]: I0129 15:30:43.990521 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:30:44 crc kubenswrapper[5008]: I0129 15:30:44.424988 5008 patch_prober.go:28] interesting pod/route-controller-manager-64b449df99-q9t46 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:30:44 crc kubenswrapper[5008]: I0129 15:30:44.425140 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:30:46 crc kubenswrapper[5008]: I0129 15:30:46.783748 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.327201 5008 patch_prober.go:28] interesting pod/controller-manager-d7649699d-6xx6r container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.327273 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.928177 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.943056 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986452 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:47 crc kubenswrapper[5008]: E0129 15:30:47.986836 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986854 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: E0129 15:30:47.986893 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986899 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987011 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987048 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987453 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.988720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.988823 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.989009 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.991509 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090172 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090256 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090386 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090413 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090469 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090517 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090738 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090799 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090876 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.091576 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca" (OuterVolumeSpecName: "client-ca") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.091969 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092148 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config" (OuterVolumeSpecName: "config") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092350 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config" (OuterVolumeSpecName: "config") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092766 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.096906 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9" (OuterVolumeSpecName: "kube-api-access-7dqd9") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "kube-api-access-7dqd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.097005 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.099904 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.112139 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs" (OuterVolumeSpecName: "kube-api-access-cw9rs") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "kube-api-access-cw9rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.124574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.191905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192013 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192031 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192045 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192059 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192070 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192082 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192093 5008 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192102 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192113 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.215664 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290738 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerDied","Data":"046a17c590098826d0a5eac7cd1935848d5c2b4be0940c2d3316db2e124ab690"} Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290834 5008 scope.go:117] "RemoveContainer" containerID="3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290855 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.293917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerDied","Data":"c89ec15a06edbf1f2377f00b216c0feab1fc55200bf490f013cd333af9148873"} Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.294018 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.313938 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.324546 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.332807 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.337210 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.341238 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.134860 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.334328 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" path="/var/lib/kubelet/pods/7dbbd108-38e5-44c9-a6a8-efaec064d3f0/volumes" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.335240 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" path="/var/lib/kubelet/pods/afb7e8b5-ea3c-41ae-89da-ab5ec7171600/volumes" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.805488 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.810419 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.021730 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.025557 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028164 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028396 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028868 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.031371 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.039628 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.039871 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.046939 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126509 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126632 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126650 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126683 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229839 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229978 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.230013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.231147 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.231441 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.237298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " 
pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.250232 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.258954 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.347921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.309614 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.311435 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.315648 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.316150 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.319913 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.457220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.457305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558395 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558492 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558613 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.585276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.635825 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:56 crc kubenswrapper[5008]: I0129 15:30:56.830013 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:56 crc kubenswrapper[5008]: I0129 15:30:56.926244 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.526645 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.527914 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.530499 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.726759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.726975 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.727095 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828208 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828264 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828360 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.856361 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:58 crc kubenswrapper[5008]: I0129 15:30:58.155621 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.990367 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.991159 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.991225 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.992167 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.992285 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" gracePeriod=600 Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.477998 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" exitCode=0 Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.478138 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.839248 5008 scope.go:117] "RemoveContainer" containerID="a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.522717 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.523321 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8q2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4dwdf_openshift-marketplace(d2d42845-cca1-4b60-bc84-4b2baebf702b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.524480 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.058774 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.059056 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btkm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h7vmc_openshift-marketplace(9bcecb83-1aec-4bd4-9b46-f02deb628018): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.060337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.319087 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.320023 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lhtht_openshift-marketplace(a954daed-802a-4b46-81ef-7079dcddbaa5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.321276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.323980 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.324081 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-229kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tst9c_openshift-marketplace(ea8deba9-72cb-4274-add1-e80591a9e7cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.325361 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.641377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.661494 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.661704 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dldqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cwgw5_openshift-marketplace(6aebe040-289b-48c1-a825-f12b471a5ad6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.663081 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.697688 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.698026 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z9t2h_openshift-marketplace(250e7db8-88dd-44fd-8d73-51a6f8f4ba96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.699214 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.900082 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.907946 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.908071 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftbd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkxw5_openshift-marketplace(6aef1830-577d-405c-bb54-6f9fe217ae86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.909262 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.917762 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.917908 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw6k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fd6nq_openshift-marketplace(37742fc9-fce4-41f0-ba04-7232b6e647a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.919095 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.341669 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.348431 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3f8688_c937_4724_83ec_494dcce5177d.slice/crio-8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663 WatchSource:0}: Error finding container 8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663: Status 404 returned error can't find the container with id 8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.393514 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.398317 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.402696 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf4b11bc_2d2f_4e68_ab59_cbc08fecba52.slice/crio-3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753 WatchSource:0}: Error finding container 3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753: Status 404 returned error can't find the container with id 
3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753
Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.406316 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46f1f12_a290_441c_a3bb_4584cc2a3102.slice/crio-9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934 WatchSource:0}: Error finding container 9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934: Status 404 returned error can't find the container with id 9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.410753 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.431097 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70797d1b_2554_4595_aaed_29539196bbd1.slice/crio-c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e WatchSource:0}: Error finding container c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e: Status 404 returned error can't find the container with id c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627219 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerStarted","Data":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"}
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerStarted","Data":"9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934"}
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627355 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" containerID="cri-o://23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" gracePeriod=30
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627467 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.631731 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"}
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.632946 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerStarted","Data":"3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753"}
Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.634848 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerStarted","Data":"c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.635996 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerStarted","Data":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636030 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerStarted","Data":"8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636112 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" containerID="cri-o://8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" gracePeriod=30 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636264 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.640744 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.644073 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podStartSLOduration=65.644058141 podStartE2EDuration="1m5.644058141s" podCreationTimestamp="2026-01-29 15:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:41.641588679 +0000 UTC m=+245.314442916" watchObservedRunningTime="2026-01-29 15:31:41.644058141 +0000 UTC m=+245.316912378" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.685234 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" podStartSLOduration=65.68521153 podStartE2EDuration="1m5.68521153s" podCreationTimestamp="2026-01-29 15:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:41.666119821 +0000 UTC m=+245.338974058" watchObservedRunningTime="2026-01-29 15:31:41.68521153 +0000 UTC m=+245.358065777" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.992285 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016014 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.016230 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016242 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016345 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016662 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.034598 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.109632 5008 patch_prober.go:28] interesting pod/route-controller-manager-65dbd47846-qgvzb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36014->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.109701 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36014->10.217.0.58:8443: read: connection reset by peer" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113192 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113330 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113358 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 
15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113442 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114074 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114096 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114149 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config" (OuterVolumeSpecName: "config") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.118945 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx" (OuterVolumeSpecName: "kube-api-access-xjbjx") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "kube-api-access-xjbjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.119919 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215172 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215232 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215362 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215468 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215495 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215509 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215520 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215532 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317280 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317337 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317421 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317474 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319162 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.322430 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.338323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.343193 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-65dbd47846-qgvzb_b46f1f12-a290-441c-a3bb-4584cc2a3102/route-controller-manager/0.log" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.343249 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519467 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519647 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519684 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.520548 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config" (OuterVolumeSpecName: "config") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.520750 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca" (OuterVolumeSpecName: "client-ca") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.524406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.524662 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw" (OuterVolumeSpecName: "kube-api-access-f5jnw") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "kube-api-access-f5jnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621076 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621125 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621139 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621152 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.637443 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643225 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-65dbd47846-qgvzb_b46f1f12-a290-441c-a3bb-4584cc2a3102/route-controller-manager/0.log" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643273 5008 generic.go:334] "Generic (PLEG): container finished" podID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" exitCode=255 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643329 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerDied","Data":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643341 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerDied","Data":"9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643370 5008 scope.go:117] "RemoveContainer" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.645294 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerStarted","Data":"7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.648275 5008 generic.go:334] "Generic (PLEG): container finished" podID="70797d1b-2554-4595-aaed-29539196bbd1" containerID="676b828ba9c9e8717218ec7b830b98e6f483d763f70089b6eac56428e7248a03" exitCode=0 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.648448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerDied","Data":"676b828ba9c9e8717218ec7b830b98e6f483d763f70089b6eac56428e7248a03"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650185 5008 generic.go:334] "Generic (PLEG): container finished" podID="2f3f8688-c937-4724-83ec-494dcce5177d" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" exitCode=0 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650296 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerDied","Data":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerDied","Data":"8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650516 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.658670 5008 scope.go:117] "RemoveContainer" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.659156 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": container with ID starting with 23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad not found: ID does not exist" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.659195 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"} err="failed to get container status \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": rpc error: code = NotFound desc = could not find container \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": container with ID starting with 23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad not found: ID does not exist" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.659225 5008 scope.go:117] "RemoveContainer" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.663980 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=45.66396284 podStartE2EDuration="45.66396284s" podCreationTimestamp="2026-01-29 15:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:42.66300885 +0000 UTC m=+246.335863107" watchObservedRunningTime="2026-01-29 15:31:42.66396284 +0000 UTC m=+246.336817077" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.679644 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.679743 5008 scope.go:117] "RemoveContainer" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.680154 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": container with ID starting with 8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26 not found: ID does not exist" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.680192 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} err="failed to get container status \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": rpc error: code = NotFound desc = could not find container \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": container with ID starting with 8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26 not found: ID does not exist" Jan 29 15:31:42 crc 
kubenswrapper[5008]: I0129 15:31:42.682439 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"]
Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.706975 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"]
Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.709305 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"]
Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.838480 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"]
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.330459 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" path="/var/lib/kubelet/pods/2f3f8688-c937-4724-83ec-494dcce5177d/volumes"
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.331171 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" path="/var/lib/kubelet/pods/b46f1f12-a290-441c-a3bb-4584cc2a3102/volumes"
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.657458 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerStarted","Data":"40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5"}
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.658624 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerStarted","Data":"cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3"}
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.680697 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" podStartSLOduration=47.680674894 podStartE2EDuration="47.680674894s" podCreationTimestamp="2026-01-29 15:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:43.680131023 +0000 UTC m=+247.352985310" watchObservedRunningTime="2026-01-29 15:31:43.680674894 +0000 UTC m=+247.353529161"
Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.874544 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036589 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"70797d1b-2554-4595-aaed-29539196bbd1\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"70797d1b-2554-4595-aaed-29539196bbd1\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036767 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70797d1b-2554-4595-aaed-29539196bbd1" (UID: "70797d1b-2554-4595-aaed-29539196bbd1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.037330 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051345 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:44 crc kubenswrapper[5008]: E0129 15:31:44.051560 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051573 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: E0129 15:31:44.051586 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051592 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051697 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051713 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.052339 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.052657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70797d1b-2554-4595-aaed-29539196bbd1" (UID: "70797d1b-2554-4595-aaed-29539196bbd1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.054195 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.054707 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055555 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055564 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055977 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055724 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.068660 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.138690 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.239753 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342130 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342303 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.345013 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.345502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.354918 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.364684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.388459 5008 util.go:30] "No sandbox for pod can be found. 
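The UnmountVolume/VerifyControllerAttachedVolume/MountVolume sequence above is the kubelet's volume manager reconciling desired state (volumes required by pods assigned to the node) against actual state (what is currently mounted): the dead pruner pod's volumes are torn down while the new route-controller-manager pod's volumes are set up. A toy Go sketch of that loop, with hypothetical names and none of kubelet's real plumbing:

```go
package main

import "fmt"

// reconcile brings the actual set of mounted volumes in line with the
// desired set: unmount what is no longer wanted, mount what is missing.
func reconcile(desired, actual map[string]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume started for volume", v)
			delete(actual, v)
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume started for volume", v)
			actual[v] = true
		}
	}
}

func main() {
	// Desired: the new pod's volumes. Actual: leftovers from the deleted pod.
	desired := map[string]bool{"config": true, "client-ca": true, "serving-cert": true, "kube-api-access-mw5nn": true}
	actual := map[string]bool{"kubelet-dir": true, "kube-api-access": true}
	reconcile(desired, actual)
}
```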
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665207 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerDied","Data":"c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e"} Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665258 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665226 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665420 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.672867 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.835969 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.671461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerStarted","Data":"dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479"} Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.671747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerStarted","Data":"151a001a83e99402752792ff1d9b03e857965ca404f04dce980c55396aacc517"} Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.672036 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.677448 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.691063 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" podStartSLOduration=49.69104716 podStartE2EDuration="49.69104716s" podCreationTimestamp="2026-01-29 15:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:45.685897102 +0000 UTC m=+249.358751339" watchObservedRunningTime="2026-01-29 15:31:45.69104716 +0000 UTC m=+249.363901397" Jan 29 15:31:48 crc kubenswrapper[5008]: E0129 15:31:48.326393 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lhtht" 
podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" Jan 29 15:31:50 crc kubenswrapper[5008]: E0129 15:31:50.326258 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" Jan 29 15:31:52 crc kubenswrapper[5008]: I0129 15:31:52.726589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} Jan 29 15:31:53 crc kubenswrapper[5008]: I0129 15:31:53.746194 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" exitCode=0 Jan 29 15:31:53 crc kubenswrapper[5008]: I0129 15:31:53.746264 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} Jan 29 15:31:54 crc kubenswrapper[5008]: E0129 15:31:54.325559 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:31:54 crc kubenswrapper[5008]: E0129 15:31:54.325866 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:31:54 crc kubenswrapper[5008]: I0129 15:31:54.752576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b"} Jan 29 15:31:55 crc kubenswrapper[5008]: E0129 15:31:55.325721 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.761330 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.763524 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b" exitCode=0 Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.763554 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b"} Jan 29 15:31:56 crc kubenswrapper[5008]: E0129 15:31:56.373592 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.774719 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579"} Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.798491 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7vmc" podStartSLOduration=5.969454205 podStartE2EDuration="1m40.79847478s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:20.05662067 +0000 UTC m=+163.729474907" lastFinishedPulling="2026-01-29 15:31:54.885641245 +0000 UTC m=+258.558495482" observedRunningTime="2026-01-29 15:31:56.795225438 +0000 UTC m=+260.468079735" watchObservedRunningTime="2026-01-29 15:31:56.79847478 +0000 UTC m=+260.471329017" Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.818374 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4dwdf" podStartSLOduration=3.433740065 podStartE2EDuration="1m40.818358426s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:19.013641734 +0000 UTC m=+162.686495971" lastFinishedPulling="2026-01-29 15:31:56.398260095 +0000 UTC m=+260.071114332" observedRunningTime="2026-01-29 15:31:56.815226766 +0000 UTC m=+260.488081043" watchObservedRunningTime="2026-01-29 15:31:56.818358426 +0000 UTC m=+260.491212663" Jan 29 15:31:57 crc kubenswrapper[5008]: I0129 15:31:57.150104 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:31:57 crc kubenswrapper[5008]: I0129 15:31:57.151651 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:31:58 crc kubenswrapper[5008]: I0129 15:31:58.401013 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" probeResult="failure" output=< Jan 29 15:31:58 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:31:58 crc kubenswrapper[5008]: > Jan 29 15:32:02 crc kubenswrapper[5008]: I0129 15:32:02.804713 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"} Jan 29 15:32:03 crc kubenswrapper[5008]: I0129 15:32:03.811247 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" 
Jan 29 15:32:03 crc kubenswrapper[5008]: I0129 15:32:03.811300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"}
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.771601 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.771981 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.971351 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.005768 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.228088 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7vmc"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.262878 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7vmc"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.837577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"}
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.866271 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhtht" podStartSLOduration=4.616620277 podStartE2EDuration="1m48.866244207s" podCreationTimestamp="2026-01-29 15:30:19 +0000 UTC" firstStartedPulling="2026-01-29 15:30:23.095532993 +0000 UTC m=+166.768387230" lastFinishedPulling="2026-01-29 15:32:07.345156923 +0000 UTC m=+271.018011160" observedRunningTime="2026-01-29 15:32:07.855564447 +0000 UTC m=+271.528418694" watchObservedRunningTime="2026-01-29 15:32:07.866244207 +0000 UTC m=+271.539098474"
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.398358 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"]
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.844770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4"}
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.844959 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" containerID="cri-o://c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" gracePeriod=2
Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.254134 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc"
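The failing ":50051" startup probes above target the registry-server's gRPC health endpoint; OLM catalog pods are typically probed with a gRPC health check on that port, and the probe transitions to "started"/"ready" once the catalog has loaded. A rough client-side equivalent, assuming a recent grpc-go with the standard health v1 API (a sketch, not the actual probe binary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the probe's 1s budget from the log output.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial error:", err)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err) // analogous to the timeout in the log
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is ready
}
```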
Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.392662 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.392985 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.393057 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.405716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities" (OuterVolumeSpecName: "utilities") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.406155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4" (OuterVolumeSpecName: "kube-api-access-btkm4") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "kube-api-access-btkm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.444311 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494676 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494715 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494733 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851546 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851626 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.852124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"af3e1a3fc6fe6b714e3700dd86c4612e0716f599f6f3f8cae393165561ce5bfe"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.852170 5008 scope.go:117] "RemoveContainer" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.860015 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.860089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.865880 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.865918 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.872455 5008 scope.go:117] "RemoveContainer" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.918839 5008 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.919290 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.920371 5008 scope.go:117] "RemoveContainer" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.941634 5008 scope.go:117] "RemoveContainer" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.942183 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": container with ID starting with c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96 not found: ID does not exist" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.942223 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} err="failed to get container status \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": rpc error: code = NotFound desc = could not find container \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": container with ID starting with c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96 not found: ID does not exist" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.942249 5008 scope.go:117] "RemoveContainer" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.943129 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": container with ID starting with b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76 not found: ID does not exist" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943155 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} err="failed to get container status \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": rpc error: code = NotFound desc = could not find container \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": container with ID starting with b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76 not found: ID does not exist" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943172 5008 scope.go:117] "RemoveContainer" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.943695 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": container with ID starting with 2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f not found: ID does not exist" 
containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943730 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f"} err="failed to get container status \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": rpc error: code = NotFound desc = could not find container \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": container with ID starting with 2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f not found: ID does not exist" Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190655 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190693 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.236411 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" probeResult="failure" output=< Jan 29 15:32:11 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:32:11 crc kubenswrapper[5008]: > Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.332444 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" path="/var/lib/kubelet/pods/9bcecb83-1aec-4bd4-9b46-f02deb628018/volumes" Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884323 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.888433 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.891622 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerStarted","Data":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893064 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893115 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27"} Jan 29 15:32:12 crc kubenswrapper[5008]: 
I0129 15:32:12.894947 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.894969 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.939364 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tst9c" podStartSLOduration=5.101490329 podStartE2EDuration="1m53.939346405s" podCreationTimestamp="2026-01-29 15:30:19 +0000 UTC" firstStartedPulling="2026-01-29 15:30:23.095590535 +0000 UTC m=+166.768444772" lastFinishedPulling="2026-01-29 15:32:11.933446611 +0000 UTC m=+275.606300848" observedRunningTime="2026-01-29 15:32:12.937049543 +0000 UTC m=+276.609903810" watchObservedRunningTime="2026-01-29 15:32:12.939346405 +0000 UTC m=+276.612200652" Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.958398 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwgw5" podStartSLOduration=3.88256389 podStartE2EDuration="1m56.958377532s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:19.0207747 +0000 UTC m=+162.693628937" lastFinishedPulling="2026-01-29 15:32:12.096588342 +0000 UTC m=+275.769442579" observedRunningTime="2026-01-29 15:32:12.951700472 +0000 UTC m=+276.624554729" watchObservedRunningTime="2026-01-29 15:32:12.958377532 +0000 UTC m=+276.631231789" Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.903411 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerStarted","Data":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"} Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.906012 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerStarted","Data":"ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b"} Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.927143 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkxw5" podStartSLOduration=4.68211099 podStartE2EDuration="1m55.927127742s" podCreationTimestamp="2026-01-29 15:30:18 +0000 UTC" firstStartedPulling="2026-01-29 15:30:22.075336143 +0000 UTC m=+165.748190380" lastFinishedPulling="2026-01-29 15:32:13.320352895 +0000 UTC m=+276.993207132" observedRunningTime="2026-01-29 15:32:13.924942713 +0000 UTC m=+277.597796960" watchObservedRunningTime="2026-01-29 15:32:13.927127742 +0000 UTC m=+277.599981979" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.719846 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.720144 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.769603 5008 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.797071 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z9t2h" podStartSLOduration=7.55810387 podStartE2EDuration="2m0.797047638s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:20.057048301 +0000 UTC m=+163.729902528" lastFinishedPulling="2026-01-29 15:32:13.295992059 +0000 UTC m=+276.968846296" observedRunningTime="2026-01-29 15:32:14.933365013 +0000 UTC m=+278.606219250" watchObservedRunningTime="2026-01-29 15:32:16.797047638 +0000 UTC m=+280.469901925" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.858098 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.858347 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" containerID="cri-o://40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" gracePeriod=30 Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.955965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956290 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956497 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" containerID="cri-o://dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" gracePeriod=30 Jan 29 15:32:17 crc kubenswrapper[5008]: I0129 15:32:17.004586 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.930037 5008 generic.go:334] "Generic (PLEG): container finished" podID="64612440-e59b-46bb-a60f-f10989166e58" containerID="40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" exitCode=0 Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.930127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerDied","Data":"40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.931957 5008 generic.go:334] "Generic (PLEG): container finished" podID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerID="dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" exitCode=0 Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.932024 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" 
event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerDied","Data":"dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.821325 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.821738 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.872434 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.938252 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerDied","Data":"cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.938286 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.940672 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerStarted","Data":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.951247 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.985748 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986175 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986352 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-utilities" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986430 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-utilities" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986511 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-content" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986572 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-content" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986628 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986688 5008 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986959 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.987411 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.000541 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.012464 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050775 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050830 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050894 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050913 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.052038 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.052171 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config" (OuterVolumeSpecName: "config") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.056947 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6" (OuterVolumeSpecName: "kube-api-access-st2l6") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "kube-api-access-st2l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.058021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152396 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152540 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152635 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152707 5008 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152717 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152727 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152735 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152905 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca" (OuterVolumeSpecName: "client-ca") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.202580 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254200 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254278 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254302 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254666 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254812 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256977 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.260601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.275643 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.302311 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354055 5008 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.354265 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354281 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354376 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354637 5008 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354684 5008 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355222 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355253 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355939 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355966 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356045 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356078 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356134 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356566 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356943 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config" (OuterVolumeSpecName: "config") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357596 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357615 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357629 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357634 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357645 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357651 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357882 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357887 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357895 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357901 5008 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357912 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357917 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357925 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357931 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.360106 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.360419 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.364601 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368677 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368708 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368721 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368742 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368760 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368774 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.369091 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.369103 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.369289 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.404942 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.411624 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn" (OuterVolumeSpecName: "kube-api-access-mw5nn") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "kube-api-access-mw5nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460238 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460619 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460669 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460692 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460876 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460891 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460902 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460913 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563696 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563721 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563738 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563768 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563824 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563842 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563894 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563933 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563498 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563954 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563976 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.694553 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: W0129 15:32:19.710829 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d WatchSource:0}: Error finding container 031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d: Status 404 returned error can't find the container with id 031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.713770 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.804034 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.804090 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.847513 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.848028 
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.848316 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.953037 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.955554 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972448 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" exitCode=0
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972682 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" exitCode=0
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972751 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" exitCode=0
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972834 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" exitCode=2
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972960 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.980232 5008 generic.go:334] "Generic (PLEG): container finished" podID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerID="7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896" exitCode=0
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.980517 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerDied","Data":"7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896"}
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981182 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981481 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981769 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984048 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerDied","Data":"151a001a83e99402752792ff1d9b03e857965ca404f04dce980c55396aacc517"}
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984109 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984700 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985216 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985563 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985815 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d"}
Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986050 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986235 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986243 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986495 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986853 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987183 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987651 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987944 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988191 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988468 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: 
connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988734 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.989058 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.002555 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.002839 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003393 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003625 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003956 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004221 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004526 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004808 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005094 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005320 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005574 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005810 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.027370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.028276 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.028655 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028700 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028943 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster 
comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028966 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.029014 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\\\": rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72\\\" Netns:\\\"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s\\\": dial tcp 38.102.83.50:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.029866 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.030142 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.030495 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 
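
Note the causal chain in the sandbox failure above: CNI networking itself is not broken; multus successfully receives the ADD but then cannot write the pod's network-status annotation back ("SetPodNetworkStatusAnnotation ... connection refused") because the API server is down, so the whole RunPodSandbox call fails. The StdinData blob in the error is the multus network config the runtime passes to the plugin on stdin. Decoding it with matching field names makes the failure easier to read; the struct below is ad hoc for this log, not multus's own type.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type multusConf struct {
        BinDir             string `json:"binDir"`
        ClusterNetwork     string `json:"clusterNetwork"`
        CNIVersion         string `json:"cniVersion"`
        DaemonSocketDir    string `json:"daemonSocketDir"`
        GlobalNamespaces   string `json:"globalNamespaces"`
        LogLevel           string `json:"logLevel"`
        LogToStderr        bool   `json:"logToStderr"`
        Name               string `json:"name"`
        NamespaceIsolation bool   `json:"namespaceIsolation"`
        Type               string `json:"type"`
    }

    func main() {
        // Verbatim StdinData from the RunPodSandbox error above.
        stdin := `{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}`
        var c multusConf
        if err := json.Unmarshal([]byte(stdin), &c); err != nil {
            panic(err)
        }
        fmt.Printf("plugin %s delegates to %s\n", c.Type, c.ClusterNetwork)
    }
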
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.030875 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.225897 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.227120 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.227595 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228046 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228430 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228899 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.229151 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.229446 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.270625 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.271684 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272127 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272400 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272659 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272870 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.273095 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.273339 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.682240 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3
840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"na
mes\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.683252 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.683766 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684302 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684771 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684864 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.991127 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.992311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292005 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292711 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292926 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.293241 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.293775 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294172 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294516 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294814 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.388152 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.491635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492032 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492186 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492548 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock" (OuterVolumeSpecName: "var-lock") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492611 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.524900 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593939 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593970 5008 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593979 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738023 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738104 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738127 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738188 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0\\\" Netns:\\\"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to 
update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s\\\": dial tcp 38.102.83.50:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerDied","Data":"3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753"} Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009671 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009983 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.057569 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.058460 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.059019 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.059643 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.060851 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.061566 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.062351 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.442300 5008 scope.go:117] "RemoveContainer" containerID="dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.019648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752"} Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020154 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020589 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020870 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.021174 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.021476 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 
crc kubenswrapper[5008]: I0129 15:32:23.021692 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.022064 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.023941 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.025036 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" exitCode=0 Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.182048 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.182621 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183115 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183274 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183476 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183717 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183943 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" 
pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184144 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184343 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184650 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.318936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319046 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319096 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319190 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319484 5008 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319508 5008 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319527 5008 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.334911 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.036021 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.037709 5008 scope.go:117] "RemoveContainer" containerID="412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.037863 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.039233 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.039974 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.040551 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.041898 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.042803 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.043987 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.044905 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.045635 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.046652 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.047347 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.047902 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.048405 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.048930 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.049513 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.050075 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.050550 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.057058 5008 scope.go:117] "RemoveContainer" containerID="3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.071982 5008 scope.go:117] "RemoveContainer" containerID="5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.089339 5008 scope.go:117] "RemoveContainer" 
containerID="677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.104952 5008 scope.go:117] "RemoveContainer" containerID="4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.124829 5008 scope.go:117] "RemoveContainer" containerID="25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.772182 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.773590 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774121 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774420 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774682 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774987 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775280 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775524 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775778 
5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.028309 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.029190 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.029734 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.030499 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.031275 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.031754 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.032290 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.032746 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.033270 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.033845 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.328967 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.330016 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.330677 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.331337 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.331908 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.332350 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.332876 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc 
kubenswrapper[5008]: I0129 15:32:27.333566 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.334041 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.452112 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.452637 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.453332 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.454031 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.454461 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: I0129 15:32:28.454532 5008 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.455094 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.656064 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 29 15:32:29 crc kubenswrapper[5008]: E0129 15:32:29.057766 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.138028 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.139141 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.206498 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.207365 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.208052 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.208612 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209094 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209488 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209975 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.210453 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.211018 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.211478 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: E0129 15:32:29.858977 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.117712 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.118568 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.119099 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.119582 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.120213 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.120922 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.121390 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc 
kubenswrapper[5008]: I0129 15:32:30.121919 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.122498 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.123341 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.000073 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry
.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.001567 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002072 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002489 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002933 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002981 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.390017 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 
15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.460091 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106395 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106479 5008 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca" exitCode=1 Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106540 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca"} Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.107326 5008 scope.go:117] "RemoveContainer" containerID="c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.107976 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.108418 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.108874 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.109518 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.117219 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: 
connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.118059 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.118604 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.119025 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.119809 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.129499 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.115067 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.115362 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5764ee38ac7740acad09b2b6419d8e3dc71434980dac60260fe3d6dd067682f4"} Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.116685 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.117416 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 
15:32:34.117915 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.118396 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.118872 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119252 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119544 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119855 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.120161 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.120455 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.322726 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.323904 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.324589 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.325341 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.325833 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.326351 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.326859 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.327349 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.327931 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.328468 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" 
pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.329030 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.342964 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.343026 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:34 crc kubenswrapper[5008]: E0129 15:32:34.343560 5008 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.344273 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: W0129 15:32:34.375593 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263 WatchSource:0}: Error finding container 1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263: Status 404 returned error can't find the container with id 1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263 Jan 29 15:32:34 crc kubenswrapper[5008]: E0129 15:32:34.661734 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="6.4s" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123679 5008 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="31a4d285f97f87314a2f653c2e112a58f2c450ea69c61ed5f562a53d36a3bc5c" exitCode=0 Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123727 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"31a4d285f97f87314a2f653c2e112a58f2c450ea69c61ed5f562a53d36a3bc5c"} Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123755 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263"} Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.124184 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:35 crc 
kubenswrapper[5008]: I0129 15:32:35.124211 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.124818 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: E0129 15:32:35.124866 5008 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.125206 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.125719 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126050 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126395 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126600 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126888 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.127250 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.127810 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.128149 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.323288 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.875840 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.879216 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e37b92d8db62917c30f3a25b7211db52e69ef372abeabb774760c1ea044d6ce2"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e8fb6efea22bd5a89d979de81f82ae27054eb0218de4e7ce13ec8133a6f83fa3"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134260 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed902175e4fe84e95e8ef5e5b1839935090008d516f3f7b4f665c490f435bea0"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134271 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3705f6651751eff3964920cef0b92ba5043891505c354185f88451e8129c849e"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134373 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.026548 5008 cert_rotation.go:91] certificate rotation detected, shutting down client connections to 
start using new credentials Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143169 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05ea3cc82ba327e01f17af25eec980aa89be6e7eeec14d2dcc0923ebf84569de"} Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143669 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143696 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.344666 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.344982 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.358105 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:41 crc kubenswrapper[5008]: W0129 15:32:41.841473 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f45bda_9243_4ae2_858a_e32e62abeebc.slice/crio-ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691 WatchSource:0}: Error finding container ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691: Status 404 returned error can't find the container with id ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691 Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.152640 5008 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175298 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerStarted","Data":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerStarted","Data":"ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691"} Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175706 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175777 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175806 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.180057 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 
Jan 29 15:32:43 crc kubenswrapper[5008]: I0129 15:32:43.182431 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2"
Jan 29 15:32:43 crc kubenswrapper[5008]: I0129 15:32:43.182495 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2"
Jan 29 15:32:47 crc kubenswrapper[5008]: I0129 15:32:47.337825 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb24cea0-8aff-4f3f-809b-ea8aee184ece"
Jan 29 15:32:48 crc kubenswrapper[5008]: I0129 15:32:48.706615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 15:32:49 crc kubenswrapper[5008]: I0129 15:32:49.303189 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb"
Jan 29 15:32:49 crc kubenswrapper[5008]: I0129 15:32:49.309870 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb"
Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.503960 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.895665 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.968167 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.557744 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.582876 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.858595 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.995314 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.188653 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.280575 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.414889 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.492755 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.548689 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.658511 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.762517 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.849709 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.965062 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.089378 5008 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.225007 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.317094 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.520889 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.647543 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.724925 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.854899 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.886292 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.013634 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.045327 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.285745 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.328165 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.378603 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.441026 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.548453 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.561841 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.586344 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.641219 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.832718 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.954320 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.043577 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.090553 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.097446 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.119191 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.253133 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.268267 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.330218 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.337356 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.363919 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.398177 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.413462 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.484198 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.584272 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.635881 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.705949 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.714690 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.771501 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.809341 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.818708 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.833714 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.974737 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.975259 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.991464 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.127307 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.133424 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.154892 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.183423 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.193180 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.238283 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.541842 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.581521 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.645634 5008 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.683198 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.715618 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.781618 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.819422 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.865914 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.909217 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.012748 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.053217 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.101309 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.115903 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.118180 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.162996 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.169996 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.319402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.345338 5008 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.365074 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.382410 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.453638 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.509157 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.572746 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.642737 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.861755 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.943937 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.956208 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.965454 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.261593 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.287955 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.304412 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.380706 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.447589 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.470485 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.476399 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.545307 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.570602 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.580924 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.637594 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.658732 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.679758 5008 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.853309 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.917184 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.960587 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.025775 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.089459 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.093354 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.111263 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.232277 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.370447 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.581561 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.718545 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.720191 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.731171 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.736881 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.749991 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.844677 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.856399 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.907455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.974979 5008 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.001659 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.047507 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.070359 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.146360 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.188122 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.306338 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.343704 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.380416 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.402823 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.589129 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.672115 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.677410 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.726193 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.730971 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.773898 5008 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.894136 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.974053 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.005853 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.007662 5008 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.027839 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.052761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.057558 5008 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.090582 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.115531 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.159371 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.261196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.349360 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.489015 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.599230 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.623156 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.647524 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.653467 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.740083 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.789099 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.857194 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.870564 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.879088 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.957133 5008 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.023502 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.045500 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.057287 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.060684 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.073369 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.220007 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.285815 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.302969 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.310689 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.500884 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.541841 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.560040 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.569930 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.676969 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.676981 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.685250 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.794988 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.837715 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.848491 5008 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.854553 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.954512 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.968689 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.011444 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.019125 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.055351 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.102178 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.126049 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.222135 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.243064 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.316727 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.346286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.440903 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.455002 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.469206 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.523888 5008 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.583899 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.588114 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.640492 5008 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.692978 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.709758 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.734054 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.949696 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.005072 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.035627 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.115922 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.218848 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.278289 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.323959 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.427817 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.482580 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.489986 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.506461 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.552698 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.607654 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.764731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.850701 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.910080 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.945577 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.955008 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.087746 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.252983 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.324393 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.589128 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.607645 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.745870 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.801838 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.820868 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.922513 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.004686 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.073546 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.247591 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.265389 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.352630 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.500380 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.658599 
5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.685159 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.820856 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.978765 5008 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.979459 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.979426954 podStartE2EDuration="48.979426954s" podCreationTimestamp="2026-01-29 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:41.881920145 +0000 UTC m=+305.554774382" watchObservedRunningTime="2026-01-29 15:33:07.979426954 +0000 UTC m=+331.652281281" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.983560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fd6nq" podStartSLOduration=54.132814111 podStartE2EDuration="2m49.9835435s" podCreationTimestamp="2026-01-29 15:30:18 +0000 UTC" firstStartedPulling="2026-01-29 15:30:22.076005481 +0000 UTC m=+165.748859718" lastFinishedPulling="2026-01-29 15:32:17.92673483 +0000 UTC m=+281.599589107" observedRunningTime="2026-01-29 15:32:41.908276116 +0000 UTC m=+305.581130363" watchObservedRunningTime="2026-01-29 15:33:07.9835435 +0000 UTC m=+331.656397767" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.987036 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podStartSLOduration=51.987018185 podStartE2EDuration="51.987018185s" podCreationTimestamp="2026-01-29 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:42.210654702 +0000 UTC m=+305.883508939" watchObservedRunningTime="2026-01-29 15:33:07.987018185 +0000 UTC m=+331.659872462" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988069 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4","openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988157 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:33:07 crc kubenswrapper[5008]: E0129 15:33:07.988521 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988552 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988585 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988604 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988736 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.989292 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.989501 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.995700 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.996868 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.996907 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997134 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997481 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997701 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.998008 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.000960 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.021458 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.021434226 podStartE2EDuration="26.021434226s" podCreationTimestamp="2026-01-29 15:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:08.017327239 +0000 UTC m=+331.690181476" watchObservedRunningTime="2026-01-29 15:33:08.021434226 +0000 UTC m=+331.694288503" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122329 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122492 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122683 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224318 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224389 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224438 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.225883 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.226363 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.240017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.245349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.310995 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.589590 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 15:33:09 crc kubenswrapper[5008]: I0129 15:33:09.331976 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64612440-e59b-46bb-a60f-f10989166e58" path="/var/lib/kubelet/pods/64612440-e59b-46bb-a60f-f10989166e58/volumes" Jan 29 15:33:09 crc kubenswrapper[5008]: I0129 15:33:09.332659 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" path="/var/lib/kubelet/pods/bf35ff68-68b3-4743-803f-e451a5f5c5bd/volumes" Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.301960 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod 
route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:33:11 crc kubenswrapper[5008]: > Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302407 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:33:11 crc kubenswrapper[5008]: > pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302426 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" 
Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:33:11 crc kubenswrapper[5008]: > pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302482 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager(50ca549b-5e64-416a-866b-1f63371db9dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager(50ca549b-5e64-416a-866b-1f63371db9dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb\\\" Netns:\\\"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod \\\"route-controller-manager-7d5696789b-pvrc7\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" podUID="50ca549b-5e64-416a-866b-1f63371db9dd" Jan 29 15:33:15 crc kubenswrapper[5008]: I0129 15:33:15.875918 5008 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:15 crc kubenswrapper[5008]: I0129 15:33:15.876367 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" gracePeriod=5 Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.845477 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.845722 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" containerID="cri-o://7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" gracePeriod=30 Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.950589 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"] Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.950741 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.983553 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150686 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150970 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151073 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151656 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151904 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config" (OuterVolumeSpecName: "config") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.157068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv" (OuterVolumeSpecName: "kube-api-access-jffgv") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "kube-api-access-jffgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.160987 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252575 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252614 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252626 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252635 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.302917 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401234 5008 generic.go:334] "Generic (PLEG): container finished" podID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" exitCode=0 Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401326 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401839 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401983 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerDied","Data":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.402056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerDied","Data":"ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691"} Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.402296 5008 scope.go:117] "RemoveContainer" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.425910 5008 scope.go:117] "RemoveContainer" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: E0129 15:33:17.428769 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": container with ID starting with 7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea not found: ID does not exist" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.428830 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} err="failed to get container status \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": rpc error: code = NotFound desc = could not find container \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": container with ID starting with 7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea not found: ID does not exist" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.432306 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.436834 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454382 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454593 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454733 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod 
\"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454900 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455050 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455902 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455907 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca" (OuterVolumeSpecName: "client-ca") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455931 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config" (OuterVolumeSpecName: "config") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.458887 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7" (OuterVolumeSpecName: "kube-api-access-mxhk7") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "kube-api-access-mxhk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.458953 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556239 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556272 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556285 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556295 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556304 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.740206 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.746732 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.163856 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: E0129 15:33:18.164077 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164091 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: E0129 15:33:18.164112 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164122 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164232 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164245 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164649 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168867 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168954 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168867 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.169113 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.169392 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.170738 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.172549 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.173575 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.201865 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.204615 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.206583 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.208447 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.208836 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.209163 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.209450 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.225108 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.234935 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267123 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267150 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267226 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267274 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368126 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368179 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368374 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368452 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368831 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368987 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369085 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369140 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.370660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " 
pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.379132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.394821 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470578 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470651 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.472657 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.473602 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.481426 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.489004 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.503043 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.521056 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.704595 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.945424 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: W0129 15:33:18.947761 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9d2aa9_16d5_44f9_af8b_1afc90aa4f9d.slice/crio-c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4 WatchSource:0}: Error finding container c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4: Status 404 returned error can't find the container with id c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4 Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.330396 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" path="/var/lib/kubelet/pods/17f45bda-9243-4ae2-858a-e32e62abeebc/volumes" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.331074 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ca549b-5e64-416a-866b-1f63371db9dd" path="/var/lib/kubelet/pods/50ca549b-5e64-416a-866b-1f63371db9dd/volumes" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.412842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerStarted","Data":"95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.412883 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerStarted","Data":"45480fb628d022117f602dfca00d9f038e3fadbd266d68d97c74b5c3565707f3"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.413073 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 
29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerStarted","Data":"d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414603 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerStarted","Data":"c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414772 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.417370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.444624 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" podStartSLOduration=3.444606795 podStartE2EDuration="3.444606795s" podCreationTimestamp="2026-01-29 15:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:19.440301165 +0000 UTC m=+343.113155412" watchObservedRunningTime="2026-01-29 15:33:19.444606795 +0000 UTC m=+343.117461032" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.464844 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" podStartSLOduration=3.464818842 podStartE2EDuration="3.464818842s" podCreationTimestamp="2026-01-29 15:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:19.45725365 +0000 UTC m=+343.130107917" watchObservedRunningTime="2026-01-29 15:33:19.464818842 +0000 UTC m=+343.137673139" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.757291 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.430077 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.430452 5008 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" exitCode=137 Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.489196 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.489274 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.616843 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.616932 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617030 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617075 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617118 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617176 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617280 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617712 5008 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617743 5008 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617754 5008 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617765 5008 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.634498 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.719091 5008 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439029 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439098 5008 scope.go:117] "RemoveContainer" containerID="91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439247 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.329892 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.330124 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.341650 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.341700 5008 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d660cc1-4441-4e90-bef9-fe103703354d" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.346264 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.346300 5008 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d660cc1-4441-4e90-bef9-fe103703354d" Jan 29 15:33:25 crc kubenswrapper[5008]: I0129 15:33:25.665497 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:33:30 crc kubenswrapper[5008]: I0129 15:33:30.208412 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:33:30 crc kubenswrapper[5008]: I0129 15:33:30.578740 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.850959 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.851505 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" containerID="cri-o://95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" gracePeriod=30 Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.878386 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.878687 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" containerID="cri-o://d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" gracePeriod=30 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.539705 5008 generic.go:334] "Generic (PLEG): container finished" podID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerID="d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" exitCode=0 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.539923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerDied","Data":"d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd"} Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.543701 5008 generic.go:334] "Generic (PLEG): container finished" podID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerID="95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" exitCode=0 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.543757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerDied","Data":"95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0"} Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.979722 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.984578 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019088 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: E0129 15:33:38.019311 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019323 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: E0129 15:33:38.019339 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019346 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019435 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019453 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019805 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.021670 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040617 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040703 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040746 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040770 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040832 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040901 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040926 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040958 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041013 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041408 5008 
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041408 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041410 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config" (OuterVolumeSpecName: "config") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041717 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041889 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.042069 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config" (OuterVolumeSpecName: "config") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.042134 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.050956 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.054916 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.054925 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm" (OuterVolumeSpecName: "kube-api-access-6nrtm") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "kube-api-access-6nrtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.061478 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc" (OuterVolumeSpecName: "kube-api-access-zchzc") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "kube-api-access-zchzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.142987 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143033 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143069 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143112 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143169 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143221 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143231 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") on node \"crc\" DevicePath 
\"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143241 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143252 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143279 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143289 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143297 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.244687 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245360 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.246323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: 
I0129 15:33:38.247137 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.257552 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.261702 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.341485 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551523 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551519 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerDied","Data":"45480fb628d022117f602dfca00d9f038e3fadbd266d68d97c74b5c3565707f3"} Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551681 5008 scope.go:117] "RemoveContainer" containerID="95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.553030 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerDied","Data":"c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4"} Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.553090 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.569064 5008 scope.go:117] "RemoveContainer" containerID="d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.585164 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.590306 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.594506 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.598455 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.789530 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: W0129 15:33:38.789992 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ffb7e45_37e9_49cf_981c_d88916bba44b.slice/crio-6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d WatchSource:0}: Error finding container 6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d: Status 404 returned error can't find the container with id 6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.330041 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" path="/var/lib/kubelet/pods/397801e5-e82c-402b-9d5a-fd7853243b8e/volumes" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.330876 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" path="/var/lib/kubelet/pods/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d/volumes" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerStarted","Data":"c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b"} Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerStarted","Data":"6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d"} Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565801 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.572952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.583821 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" podStartSLOduration=3.583767707 podStartE2EDuration="3.583767707s" podCreationTimestamp="2026-01-29 15:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:39.580331554 +0000 UTC m=+363.253185811" watchObservedRunningTime="2026-01-29 15:33:39.583767707 +0000 UTC m=+363.256621964"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.180666 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"]
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.181658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.184968 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185008 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185104 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185670 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.187556 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.189520 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.194498 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.196379 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"]
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273281 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273528 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"
Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273653 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"
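[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=3.583767707 for route-controller-manager-555476556f-pvck6: the watch-observed running time minus the pod's creation timestamp, with any image-pull window excluded (the zero firstStartedPulling/lastFinishedPulling values mean no pull happened here, so nothing is subtracted). A small Go sketch of that arithmetic, mirroring the entry's fields rather than the tracker's exact implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-29T15:33:36Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-29T15:33:39.583767707Z")
	var pullStart, pullEnd time.Time // zero values, as in the entry

	slo := observed.Sub(created)
	if !pullStart.IsZero() && !pullEnd.IsZero() {
		slo -= pullEnd.Sub(pullStart) // exclude time spent pulling images
	}
	fmt.Println(slo) // prints 3.583767707s, matching podStartSLOduration
}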
\"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273810 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273956 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.374697 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.375758 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.375955 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.376143 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.376321 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: 
\"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.383640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.394175 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.505184 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.978056 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:33:40 crc kubenswrapper[5008]: W0129 15:33:40.982235 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93ec6db8_09a1_4b3b_900d_867f728452cb.slice/crio-5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4 WatchSource:0}: Error finding container 5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4: Status 404 returned error can't find the container with id 5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4 Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577083 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerStarted","Data":"2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37"} Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577443 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerStarted","Data":"5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4"} Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577466 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.582994 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.602064 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" podStartSLOduration=5.602050122 podStartE2EDuration="5.602050122s" podCreationTimestamp="2026-01-29 15:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:41.597247163 +0000 UTC m=+365.270101440" watchObservedRunningTime="2026-01-29 15:33:41.602050122 +0000 UTC m=+365.274904369" Jan 29 15:33:43 crc kubenswrapper[5008]: I0129 15:33:43.991166 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:33:43 crc kubenswrapper[5008]: I0129 15:33:43.991278 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:02 crc kubenswrapper[5008]: I0129 15:34:02.856085 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:02 crc kubenswrapper[5008]: I0129 15:34:02.857245 5008 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" containerID="cri-o://437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" gracePeriod=2 Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.381619 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387557 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387694 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.389664 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities" (OuterVolumeSpecName: "utilities") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.399127 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4" (OuterVolumeSpecName: "kube-api-access-z5sl4") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "kube-api-access-z5sl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.454746 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489054 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489106 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489127 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721729 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" exitCode=0 Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"} Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721840 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"616df5323044bc3ebd3a98d75f3ea061e944f69d5bc62803ba635bd69dee1996"} Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721845 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721858 5008 scope.go:117] "RemoveContainer" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.748807 5008 scope.go:117] "RemoveContainer" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.759536 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.763980 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.787202 5008 scope.go:117] "RemoveContainer" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.810457 5008 scope.go:117] "RemoveContainer" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.811209 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": container with ID starting with 437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51 not found: ID does not exist" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811264 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"} err="failed to get container status \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": rpc error: code = NotFound desc = could not find container \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": container with ID starting with 437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51 not found: ID does not exist" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811297 5008 scope.go:117] "RemoveContainer" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.811894 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": container with ID starting with e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987 not found: ID does not exist" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811959 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987"} err="failed to get container status \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": rpc error: code = NotFound desc = could not find container \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": container with ID starting with e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987 not found: ID does not exist" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811995 5008 scope.go:117] "RemoveContainer" 
containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.812403 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": container with ID starting with e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9 not found: ID does not exist" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.812427 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9"} err="failed to get container status \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": rpc error: code = NotFound desc = could not find container \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": container with ID starting with e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9 not found: ID does not exist" Jan 29 15:34:04 crc kubenswrapper[5008]: I0129 15:34:04.030681 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.045291 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.045564 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" containerID="cri-o://a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" gracePeriod=2 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.248325 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.248702 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" containerID="cri-o://a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" gracePeriod=2 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.345460 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" path="/var/lib/kubelet/pods/250e7db8-88dd-44fd-8d73-51a6f8f4ba96/volumes" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.501650 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516418 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516479 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516522 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.517498 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities" (OuterVolumeSpecName: "utilities") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.538241 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.538634 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4" (OuterVolumeSpecName: "kube-api-access-lw6k4") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "kube-api-access-lw6k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617621 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617671 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617687 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.643705 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718156 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718237 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718279 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.719183 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities" (OuterVolumeSpecName: "utilities") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.720962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb" (OuterVolumeSpecName: "kube-api-access-6pfbb") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "kube-api-access-6pfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742157 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" exitCode=0 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742325 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742424 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742490 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"c7bb2d8d5dfc5bd460b51cbe8abe72fb7d9bc5d3e8c022f6997fb845b267cc34"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742519 5008 scope.go:117] "RemoveContainer" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.745909 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" exitCode=0 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.745944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.746152 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.746223 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.772993 5008 scope.go:117] "RemoveContainer" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.785001 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.793553 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.807335 5008 scope.go:117] "RemoveContainer" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.819143 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.819174 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820376 5008 scope.go:117] "RemoveContainer" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.820684 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": container with ID starting with a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee not found: ID does not exist" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820712 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"} err="failed to get container status \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": rpc error: code = NotFound desc = could not find container \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": container with ID starting with a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820735 5008 scope.go:117] "RemoveContainer" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.821268 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": container with ID starting with 3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277 not found: ID does not exist" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821309 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"} err="failed to get container status \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": rpc error: code 
= NotFound desc = could not find container \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": container with ID starting with 3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821339 5008 scope.go:117] "RemoveContainer" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.821677 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": container with ID starting with 01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5 not found: ID does not exist" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821739 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5"} err="failed to get container status \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": rpc error: code = NotFound desc = could not find container \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": container with ID starting with 01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821800 5008 scope.go:117] "RemoveContainer" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.831283 5008 scope.go:117] "RemoveContainer" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.843504 5008 scope.go:117] "RemoveContainer" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855068 5008 scope.go:117] "RemoveContainer" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.855460 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": container with ID starting with a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024 not found: ID does not exist" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855501 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} err="failed to get container status \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": rpc error: code = NotFound desc = could not find container \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": container with ID starting with a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855541 5008 scope.go:117] "RemoveContainer" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.855957 5008 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": container with ID starting with 20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528 not found: ID does not exist" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856005 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528"} err="failed to get container status \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": rpc error: code = NotFound desc = could not find container \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": container with ID starting with 20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856032 5008 scope.go:117] "RemoveContainer" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.856308 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": container with ID starting with 07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc not found: ID does not exist" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856349 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc"} err="failed to get container status \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": rpc error: code = NotFound desc = could not find container \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": container with ID starting with 07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.884064 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.920549 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:06 crc kubenswrapper[5008]: I0129 15:34:06.084094 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:06 crc kubenswrapper[5008]: I0129 15:34:06.090151 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:07 crc kubenswrapper[5008]: I0129 15:34:07.338627 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" path="/var/lib/kubelet/pods/37742fc9-fce4-41f0-ba04-7232b6e647a7/volumes" Jan 29 15:34:07 crc kubenswrapper[5008]: I0129 15:34:07.340275 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" path="/var/lib/kubelet/pods/a954daed-802a-4b46-81ef-7079dcddbaa5/volumes" Jan 29 15:34:13 crc kubenswrapper[5008]: I0129 15:34:13.990647 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:13 crc kubenswrapper[5008]: I0129 15:34:13.992563 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.071014 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" containerID="cri-o://2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" gracePeriod=15 Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.476025 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509154 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509445 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509462 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509473 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509481 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509489 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509496 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509538 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509546 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509554 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509561 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509576 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509607 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509619 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509627 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509637 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509646 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509655 5008 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509683 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509693 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509701 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509847 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509882 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509899 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509965 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.510551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.526873 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592702 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592742 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592764 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592825 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592870 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592890 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592916 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592956 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592977 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593008 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593027 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593194 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593215 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593231 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593249 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593266 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593309 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593334 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593349 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593364 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593381 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593397 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593437 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593613 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593625 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.594849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.600658 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.600878 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.601424 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn" (OuterVolumeSpecName: "kube-api-access-xfxpn") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "kube-api-access-xfxpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.602744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603115 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603378 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603541 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.605093 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.606218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.607460 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.614262 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694974 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694996 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695033 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695059 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " 
pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695158 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695181 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695205 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695302 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695316 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695330 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") 
on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695343 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695357 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695370 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695382 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695395 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695408 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695422 5008 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695435 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695449 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695461 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695475 5008 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695515 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: 
\"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.696931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.696926 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698738 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700335 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " 
pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.702021 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.704503 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.705137 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.709575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.712948 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.833440 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022240 5008 generic.go:334] "Generic (PLEG): container finished" podID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" exitCode=0 Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022453 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerDied","Data":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"} Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022657 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerDied","Data":"06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667"} Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022725 5008 scope.go:117] "RemoveContainer" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022535 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.046939 5008 scope.go:117] "RemoveContainer" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: E0129 15:34:30.047495 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": container with ID starting with 2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7 not found: ID does not exist" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.048167 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"} err="failed to get container status \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": rpc error: code = NotFound desc = could not find container \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": container with ID starting with 2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7 not found: ID does not exist" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.064708 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.064752 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.083110 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.029248 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" event={"ID":"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca","Type":"ContainerStarted","Data":"5d4dd487c926696523442afdcba3dcf59ce21fd22ceb8ff6d4be8453a2851820"} Jan 29 15:34:31 crc kubenswrapper[5008]: 
I0129 15:34:31.029515 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" event={"ID":"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca","Type":"ContainerStarted","Data":"1ec1133f12712a69bb2b3eb98694534f34af6427fd961001ad600a0cdab82fcc"} Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.030898 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.037557 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.056408 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" podStartSLOduration=27.056386138 podStartE2EDuration="27.056386138s" podCreationTimestamp="2026-01-29 15:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:31.054463032 +0000 UTC m=+414.727317289" watchObservedRunningTime="2026-01-29 15:34:31.056386138 +0000 UTC m=+414.729240395" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.329208 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" path="/var/lib/kubelet/pods/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1/volumes" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.812832 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.813954 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.840007 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937469 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937536 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937571 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937675 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.963234 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.038991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039098 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039155 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039187 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039212 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039234 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.040914 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.041018 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.049719 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.050408 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.060180 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.063460 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.129733 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.378917 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.044617 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" event={"ID":"13cb1565-085a-43d5-8c2c-8bc9ad134dbd","Type":"ContainerStarted","Data":"0bfa8fceab34a99c5661ce181db26eb15e0ddc6f70e78329eb85ff451fdb0e4a"} Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.044919 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" event={"ID":"13cb1565-085a-43d5-8c2c-8bc9ad134dbd","Type":"ContainerStarted","Data":"dc226af586b75e70cbacda6f5c41b494753d9968ce8d8bc01f319a9ebc77ecc3"} Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.045637 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.060122 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" podStartSLOduration=2.060102379 podStartE2EDuration="2.060102379s" podCreationTimestamp="2026-01-29 15:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:34.058680615 +0000 UTC m=+417.731534862" watchObservedRunningTime="2026-01-29 15:34:34.060102379 +0000 UTC m=+417.732956626" Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.835942 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.836541 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" containerID="cri-o://2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" gracePeriod=30 Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.869474 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.869702 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" containerID="cri-o://c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" gracePeriod=30 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.068778 5008 generic.go:334] "Generic (PLEG): container finished" podID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerID="2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" exitCode=0 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.068868 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerDied","Data":"2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37"} Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 
15:34:37.070480 5008 generic.go:334] "Generic (PLEG): container finished" podID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerID="c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" exitCode=0 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.070513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerDied","Data":"c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b"} Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.310230 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.315955 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399504 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399531 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399561 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399586 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399611 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399741 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399770 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400576 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400666 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config" (OuterVolumeSpecName: "config") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.401503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.402155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config" (OuterVolumeSpecName: "config") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405008 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b" (OuterVolumeSpecName: "kube-api-access-5dz6b") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "kube-api-access-5dz6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.408255 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb" (OuterVolumeSpecName: "kube-api-access-dcvpb") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "kube-api-access-dcvpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501629 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501665 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501674 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501682 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501690 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501698 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501708 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501715 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501723 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077854 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077866 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerDied","Data":"6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d"} Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077995 5008 scope.go:117] "RemoveContainer" containerID="c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.079690 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerDied","Data":"5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4"} Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.079739 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.092754 5008 scope.go:117] "RemoveContainer" containerID="2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.119186 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.132524 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.135954 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.138697 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220240 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:38 crc kubenswrapper[5008]: E0129 15:34:38.220834 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220904 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: E0129 15:34:38.220923 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220936 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221181 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221213 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" 
containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221876 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224351 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224675 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224853 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224975 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.225104 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.227731 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.229804 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.230912 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.233303 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235128 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235231 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235445 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235682 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.236285 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.238452 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.240722 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.251543 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312422 5008 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312495 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312569 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312763 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312880 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312907 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvh8\" (UniqueName: \"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312966 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.313044 
5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414057 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414173 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414220 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414277 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414329 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvh8\" (UniqueName: 
\"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.415955 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416124 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416997 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.417268 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.418837 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.420608 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " 
pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.430519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.438002 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvh8\" (UniqueName: \"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.538087 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.548665 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.002289 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.032244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.086589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" event={"ID":"f1f49e52-7a77-4c24-8bad-f171e4278f8e","Type":"ContainerStarted","Data":"20e96f50276ca61b255e3eb4e2c3bc5077a19ae95d30e49d861f92b90e75d823"} Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.088631 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" event={"ID":"1abe2571-fd60-4224-b5f9-8f0b501c14ce","Type":"ContainerStarted","Data":"4cf156e56fbe299eda1af33ba6c8769ef1c1f138ff085729ee3ea93b39c940c3"} Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.334056 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" path="/var/lib/kubelet/pods/93ec6db8-09a1-4b3b-900d-867f728452cb/volumes" Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.334987 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" path="/var/lib/kubelet/pods/9ffb7e45-37e9-49cf-981c-d88916bba44b/volumes" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.098239 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" event={"ID":"1abe2571-fd60-4224-b5f9-8f0b501c14ce","Type":"ContainerStarted","Data":"7b4ee8befe1d6476025015239b2664d7522cb29aa77dd77ee357198cbfb8cbff"} Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.099475 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 
15:34:40.101347 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" event={"ID":"f1f49e52-7a77-4c24-8bad-f171e4278f8e","Type":"ContainerStarted","Data":"b4fe2770368607a122e538b36ba804d4530925293570df5539a68243dd02da22"} Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.101765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.108136 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.113834 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.122606 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" podStartSLOduration=4.12258642 podStartE2EDuration="4.12258642s" podCreationTimestamp="2026-01-29 15:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:40.121961444 +0000 UTC m=+423.794815691" watchObservedRunningTime="2026-01-29 15:34:40.12258642 +0000 UTC m=+423.795440667" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.143559 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" podStartSLOduration=4.143538362 podStartE2EDuration="4.143538362s" podCreationTimestamp="2026-01-29 15:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:40.138242555 +0000 UTC m=+423.811096802" watchObservedRunningTime="2026-01-29 15:34:40.143538362 +0000 UTC m=+423.816392619" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.990587 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.991158 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.991241 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.992168 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:34:43 crc 
kubenswrapper[5008]: I0129 15:34:43.992295 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" gracePeriod=600 Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.127999 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" exitCode=0 Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.128049 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"} Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.128105 5008 scope.go:117] "RemoveContainer" containerID="b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" Jan 29 15:34:45 crc kubenswrapper[5008]: I0129 15:34:45.137513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} Jan 29 15:34:53 crc kubenswrapper[5008]: I0129 15:34:53.138486 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:53 crc kubenswrapper[5008]: I0129 15:34:53.186021 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.653316 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.654632 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" containerID="cri-o://fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.660011 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.661007 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" containerID="cri-o://f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.694993 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.695852 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" containerID="cri-o://8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" gracePeriod=30 Jan 29 15:35:01 crc 
kubenswrapper[5008]: I0129 15:35:01.708867 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.709102 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" containerID="cri-o://ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.717112 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.719997 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.726219 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.726480 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" containerID="cri-o://9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.733379 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.844827 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.845229 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.845287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946224 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946259 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.947290 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.954590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.995604 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.047450 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.245590 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.246963 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.247029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.248879 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.248959 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250797 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250858 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250863 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250877 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"54d6cf905ba0c9c55baea0b1bbde4338656f4661c2571ae702fdc0067f3ef4cb"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250897 5008 scope.go:117] "RemoveContainer" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.254009 5008 generic.go:334] "Generic (PLEG): container finished" podID="7473d665-3627-4470-a820-ebdbdc113587" containerID="8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.254166 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerDied","Data":"8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273045 5008 scope.go:117] "RemoveContainer" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273109 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273129 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.304999 5008 scope.go:117] "RemoveContainer" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.337474 5008 scope.go:117] "RemoveContainer" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: E0129 15:35:02.342302 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": container with ID starting with fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f not found: ID does not exist" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342365 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} err="failed to get container status \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": rpc error: code = NotFound desc = could not find container \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": container with ID starting with fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342397 5008 scope.go:117] "RemoveContainer" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc 
kubenswrapper[5008]: E0129 15:35:02.342834 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": container with ID starting with b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869 not found: ID does not exist" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342856 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869"} err="failed to get container status \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": rpc error: code = NotFound desc = could not find container \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": container with ID starting with b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869 not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342872 5008 scope.go:117] "RemoveContainer" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: E0129 15:35:02.344987 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": container with ID starting with f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b not found: ID does not exist" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.345033 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b"} err="failed to get container status \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": rpc error: code = NotFound desc = could not find container \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": container with ID starting with f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350423 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350495 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.351544 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities" (OuterVolumeSpecName: "utilities") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.355963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp" (OuterVolumeSpecName: "kube-api-access-dldqp") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "kube-api-access-dldqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.410859 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.452574 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.452899 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.453180 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.459203 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.467699 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.487662 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.524104 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556500 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556812 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556998 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557091 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557183 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557387 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-229kp\" (UniqueName: 
\"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557909 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557196 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities" (OuterVolumeSpecName: "utilities") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558194 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities" (OuterVolumeSpecName: "utilities") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558382 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities" (OuterVolumeSpecName: "utilities") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558648 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558746 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558857 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558950 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.588103 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.605487 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9" (OuterVolumeSpecName: "kube-api-access-ftbd9") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "kube-api-access-ftbd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.605594 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q" (OuterVolumeSpecName: "kube-api-access-s8q2q") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "kube-api-access-s8q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608222 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn" (OuterVolumeSpecName: "kube-api-access-l2kqn") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "kube-api-access-l2kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608676 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp" (OuterVolumeSpecName: "kube-api-access-229kp") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "kube-api-access-229kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.621314 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.621367 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.626149 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.659999 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660702 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660724 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660737 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660748 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660762 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660774 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660804 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.702960 5008 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.763104 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280294 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerDied","Data":"744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280569 5008 scope.go:117] "RemoveContainer" containerID="8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280302 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.282825 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"add0ef656328b3411c8246a1cffa7e2baeefc91f711bf33d67c37a176e10eb38"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.282893 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.288300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" event={"ID":"077a9343-695d-4180-9255-41f1eaeb58a3","Type":"ContainerStarted","Data":"8c5780bdf73732a664202a63403be5237694bfd4cb9a15e445217aa18813d668"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.288324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" event={"ID":"077a9343-695d-4180-9255-41f1eaeb58a3","Type":"ContainerStarted","Data":"ef7302fd31879c7584c1dc343696e2b136034cd38c8da40fd9a6416da7664dc8"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.289529 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.293366 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"dd8d6696ceba57808730ee9b74baad13f0f3efae19998fb92ff0c2c357522c56"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.293539 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.294733 5008 scope.go:117] "RemoveContainer" containerID="9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.306557 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"57f282b94968e79e724bd40448547c7c110b5b3c35e9677aea1eb21b270ed1d9"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.306606 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.316446 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.322342 5008 scope.go:117] "RemoveContainer" containerID="c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.328144 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" podStartSLOduration=2.328117761 podStartE2EDuration="2.328117761s" podCreationTimestamp="2026-01-29 15:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:35:03.310630303 +0000 UTC m=+446.983484540" watchObservedRunningTime="2026-01-29 15:35:03.328117761 +0000 UTC m=+447.000971998" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.347295 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" path="/var/lib/kubelet/pods/6aebe040-289b-48c1-a825-f12b471a5ad6/volumes" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.348341 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.348385 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.350893 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.353848 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.368173 5008 scope.go:117] "RemoveContainer" containerID="4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.384858 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.387912 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.393976 5008 scope.go:117] "RemoveContainer" containerID="f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.394537 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.410216 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.417271 5008 scope.go:117] "RemoveContainer" containerID="5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.440237 5008 scope.go:117] "RemoveContainer" containerID="62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.454814 5008 scope.go:117] "RemoveContainer" containerID="ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.475343 5008 scope.go:117] "RemoveContainer" containerID="6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.486238 5008 scope.go:117] "RemoveContainer" containerID="b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871142 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871694 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871717 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871765 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871821 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871838 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871851 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871874 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871927 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871946 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872012 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872027 
5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872052 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872064 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872124 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872136 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872149 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872199 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872214 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872226 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872245 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872296 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872317 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872329 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872345 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872396 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872608 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872630 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872655 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc 
kubenswrapper[5008]: I0129 15:35:03.872678 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872692 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.873920 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.876617 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.880838 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982476 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.071107 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.072044 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.073873 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085409 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085457 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085519 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085963 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.086502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.087313 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.112802 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.186839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.186899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " 
pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.187102 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.202659 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288470 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288515 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288562 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.289200 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.289329 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.310978 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.405375 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.586498 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:04 crc kubenswrapper[5008]: W0129 15:35:04.600408 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1babb539_12b9_4532_b9c3_bc165829c40e.slice/crio-e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2 WatchSource:0}: Error finding container e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2: Status 404 returned error can't find the container with id e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2 Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.808858 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: W0129 15:35:04.815956 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbd5270_4a24_47ba_a0cf_0c3382a833c0.slice/crio-f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c WatchSource:0}: Error finding container f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c: Status 404 returned error can't find the container with id f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.330758 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" path="/var/lib/kubelet/pods/6aef1830-577d-405c-bb54-6f9fe217ae86/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.331421 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7473d665-3627-4470-a820-ebdbdc113587" path="/var/lib/kubelet/pods/7473d665-3627-4470-a820-ebdbdc113587/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.331854 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" path="/var/lib/kubelet/pods/d2d42845-cca1-4b60-bc84-4b2baebf702b/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.332380 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" path="/var/lib/kubelet/pods/ea8deba9-72cb-4274-add1-e80591a9e7cc/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336765 5008 generic.go:334] "Generic (PLEG): container finished" podID="1babb539-12b9-4532-b9c3-bc165829c40e" containerID="c71dddbb50bddf0961ab298e304c65bedc0bbf44cbca0140b51d704f99e7773a" exitCode=0 Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336811 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerDied","Data":"c71dddbb50bddf0961ab298e304c65bedc0bbf44cbca0140b51d704f99e7773a"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerStarted","Data":"e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341078 5008 generic.go:334] "Generic (PLEG): container finished" 
podID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerID="2ee95d7903e3576a4d9a678fd50e6ad9cbd147b2b509919775ff7bee59a15d44" exitCode=0 Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341179 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerDied","Data":"2ee95d7903e3576a4d9a678fd50e6ad9cbd147b2b509919775ff7bee59a15d44"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341209 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c"} Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.269912 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.271667 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.277088 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.279009 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.417760 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.418034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.418095 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.467091 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.468385 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.472818 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.477063 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519598 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519634 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519670 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519704 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519756 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.520026 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.520298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: 
\"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.538023 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620564 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620601 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.621009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.621120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.626206 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.637755 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.799860 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.092721 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:07 crc kubenswrapper[5008]: W0129 15:35:07.168768 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6263e09b_1d9a_4833_851b_1cb8c8132dfe.slice/crio-57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47 WatchSource:0}: Error finding container 57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47: Status 404 returned error can't find the container with id 57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.224270 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:07 crc kubenswrapper[5008]: W0129 15:35:07.247099 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4517208_d057_4652_a3c2_fb8374a45a04.slice/crio-4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc WatchSource:0}: Error finding container 4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc: Status 404 returned error can't find the container with id 4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.358385 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.360814 5008 generic.go:334] "Generic (PLEG): container finished" podID="6263e09b-1d9a-4833-851b-1cb8c8132dfe" containerID="1ee50f343fac896f32e8426a9fca1830223d71004d0f941dac17e02272ea739e" exitCode=0 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.361003 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerDied","Data":"1ee50f343fac896f32e8426a9fca1830223d71004d0f941dac17e02272ea739e"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.361122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerStarted","Data":"57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.366139 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerStarted","Data":"4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.368029 5008 generic.go:334] "Generic (PLEG): container finished" podID="1babb539-12b9-4532-b9c3-bc165829c40e" containerID="6fad28f4bf1cf406958ddb55142116cd23f56d5096aa3407e95620ebb3a848e6" exitCode=0 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.368086 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" 
event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerDied","Data":"6fad28f4bf1cf406958ddb55142116cd23f56d5096aa3407e95620ebb3a848e6"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.374165 5008 generic.go:334] "Generic (PLEG): container finished" podID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerID="f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.374241 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerDied","Data":"f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.378204 5008 generic.go:334] "Generic (PLEG): container finished" podID="6263e09b-1d9a-4833-851b-1cb8c8132dfe" containerID="6516b380bd26c21d19906c2018f0fe0dc2208d3e146b72ecc659dc058365fb8a" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.378255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerDied","Data":"6516b380bd26c21d19906c2018f0fe0dc2208d3e146b72ecc659dc058365fb8a"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.381292 5008 generic.go:334] "Generic (PLEG): container finished" podID="b4517208-d057-4652-a3c2-fb8374a45a04" containerID="3e99b0758cd250255cb957c3e3c8a726a0dcf68bdfc21fa22b16d16aec39c8cb" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.381340 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerDied","Data":"3e99b0758cd250255cb957c3e3c8a726a0dcf68bdfc21fa22b16d16aec39c8cb"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.385161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerStarted","Data":"089753bfd363a7f88b34666d0f0064b2c2b42df8b6e141620a4f9204ab79a2d9"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.426948 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nd64n" podStartSLOduration=2.711090651 podStartE2EDuration="5.426932765s" podCreationTimestamp="2026-01-29 15:35:03 +0000 UTC" firstStartedPulling="2026-01-29 15:35:05.338422633 +0000 UTC m=+449.011276870" lastFinishedPulling="2026-01-29 15:35:08.054264747 +0000 UTC m=+451.727118984" observedRunningTime="2026-01-29 15:35:08.411239099 +0000 UTC m=+452.084093336" watchObservedRunningTime="2026-01-29 15:35:08.426932765 +0000 UTC m=+452.099787002" Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.392346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"c648c2606a395897d1776e33ddc545c7be11dafde7981802c30449fc687b5b1f"} Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.395440 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerStarted","Data":"66815f6259390c53bbff0823dc258c97136d9ac4e32415a6f01a94831722e5ec"} Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.422904 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5g5wg" podStartSLOduration=1.877726032 podStartE2EDuration="5.422887745s" podCreationTimestamp="2026-01-29 15:35:04 +0000 UTC" firstStartedPulling="2026-01-29 15:35:05.342393308 +0000 UTC m=+449.015247545" lastFinishedPulling="2026-01-29 15:35:08.887555021 +0000 UTC m=+452.560409258" observedRunningTime="2026-01-29 15:35:09.4226578 +0000 UTC m=+453.095512057" watchObservedRunningTime="2026-01-29 15:35:09.422887745 +0000 UTC m=+453.095741982" Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.439574 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2shr" podStartSLOduration=1.903850864 podStartE2EDuration="3.439560645s" podCreationTimestamp="2026-01-29 15:35:06 +0000 UTC" firstStartedPulling="2026-01-29 15:35:07.364526213 +0000 UTC m=+451.037380450" lastFinishedPulling="2026-01-29 15:35:08.900235994 +0000 UTC m=+452.573090231" observedRunningTime="2026-01-29 15:35:09.43850307 +0000 UTC m=+453.111357317" watchObservedRunningTime="2026-01-29 15:35:09.439560645 +0000 UTC m=+453.112414882" Jan 29 15:35:10 crc kubenswrapper[5008]: I0129 15:35:10.403629 5008 generic.go:334] "Generic (PLEG): container finished" podID="b4517208-d057-4652-a3c2-fb8374a45a04" containerID="d950744f523fe600a5ffb714a059bbb60099a47879bf08a43236c06ec7e7485d" exitCode=0 Jan 29 15:35:10 crc kubenswrapper[5008]: I0129 15:35:10.403749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerDied","Data":"d950744f523fe600a5ffb714a059bbb60099a47879bf08a43236c06ec7e7485d"} Jan 29 15:35:11 crc kubenswrapper[5008]: I0129 15:35:11.411477 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerStarted","Data":"725bdcdc2388f6f1986bbce31dd59ec18775828a95c676633832ed8592276314"} Jan 29 15:35:11 crc kubenswrapper[5008]: I0129 15:35:11.432409 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5br4h" podStartSLOduration=2.87119727 podStartE2EDuration="5.432395929s" podCreationTimestamp="2026-01-29 15:35:06 +0000 UTC" firstStartedPulling="2026-01-29 15:35:08.382769168 +0000 UTC m=+452.055623405" lastFinishedPulling="2026-01-29 15:35:10.943967827 +0000 UTC m=+454.616822064" observedRunningTime="2026-01-29 15:35:11.430514513 +0000 UTC m=+455.103368750" watchObservedRunningTime="2026-01-29 15:35:11.432395929 +0000 UTC m=+455.105250166" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.203264 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.204502 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.242613 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.406295 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.406346 5008 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.460683 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:15 crc kubenswrapper[5008]: I0129 15:35:15.440241 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5g5wg" podUID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerName="registry-server" probeResult="failure" output=< Jan 29 15:35:15 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:35:15 crc kubenswrapper[5008]: > Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.627075 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.627125 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.678833 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.800575 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.800991 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.837275 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:17 crc kubenswrapper[5008]: I0129 15:35:17.487366 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:17 crc kubenswrapper[5008]: I0129 15:35:17.488889 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:18 crc kubenswrapper[5008]: I0129 15:35:18.240498 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" containerID="cri-o://30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" gracePeriod=30 Jan 29 15:35:22 crc kubenswrapper[5008]: I0129 15:35:22.112883 5008 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-qm54x container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" start-of-body= Jan 29 15:35:22 crc kubenswrapper[5008]: I0129 15:35:22.113523 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" Jan 29 15:35:23 crc kubenswrapper[5008]: I0129 15:35:23.969991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057158 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057223 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057261 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057305 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057368 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057730 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057758 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.059223 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.059579 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.063990 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s" (OuterVolumeSpecName: "kube-api-access-tsm4s") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "kube-api-access-tsm4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.064359 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.064642 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.065986 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.075646 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.090836 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.158954 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159008 5008 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159031 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159050 5008 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159068 5008 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159085 5008 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159102 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303258 5008 generic.go:334] "Generic (PLEG): container finished" podID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerID="30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" exitCode=0 Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303309 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerDied","Data":"30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5"} Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303348 5008 scope.go:117] "RemoveContainer" containerID="30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.440643 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.478812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.309992 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.310004 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerDied","Data":"59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d"} Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.340696 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.348293 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:27 crc kubenswrapper[5008]: I0129 15:35:27.330429 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" path="/var/lib/kubelet/pods/30c54800-b443-4da8-9d41-22e8f156a1a1/volumes" Jan 29 15:37:13 crc kubenswrapper[5008]: I0129 15:37:13.990427 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:13 crc kubenswrapper[5008]: I0129 15:37:13.991033 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:37:43 crc kubenswrapper[5008]: I0129 15:37:43.990960 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:43 crc kubenswrapper[5008]: I0129 15:37:43.991532 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.991650 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.992519 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.992592 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.993841 5008 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.993994 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" gracePeriod=600 Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338071 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" exitCode=0 Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338663 5008 scope.go:117] "RemoveContainer" containerID="1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" Jan 29 15:38:15 crc kubenswrapper[5008]: I0129 15:38:15.348916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} Jan 29 15:38:41 crc kubenswrapper[5008]: I0129 15:38:41.226725 5008 scope.go:117] "RemoveContainer" containerID="40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.232713 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: E0129 15:40:03.234100 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234163 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234308 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234718 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241341 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241541 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241662 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kdwxj" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.248216 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.252767 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.253478 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.258249 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cw4ht" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.260055 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.260777 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.264207 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mhxb5" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.279434 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.285860 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.330357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.431413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.431461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc 
kubenswrapper[5008]: I0129 15:40:03.431575 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.454659 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.533070 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.533176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.552758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.553498 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.554537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.572532 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.578568 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.781563 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.790158 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1217edcf_8ec1_4354_8fbe_a9325b564932.slice/crio-216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d WatchSource:0}: Error finding container 216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d: Status 404 returned error can't find the container with id 216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.792839 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.824816 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.831119 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346fd378_8582_44af_8332_dad183bddf6e.slice/crio-5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0 WatchSource:0}: Error finding container 5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0: Status 404 returned error can't find the container with id 5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0 Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.860103 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.863577 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6111be19_5e01_42e4_b4cf_3728e3ee4a6f.slice/crio-4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7 WatchSource:0}: Error finding container 4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7: Status 404 returned error can't find the container with id 4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7 Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.017442 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" event={"ID":"6111be19-5e01-42e4-b4cf-3728e3ee4a6f","Type":"ContainerStarted","Data":"4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7"} Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.018901 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" event={"ID":"1217edcf-8ec1-4354-8fbe-a9325b564932","Type":"ContainerStarted","Data":"216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d"} Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.020163 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fbjsd" event={"ID":"346fd378-8582-44af-8332-dad183bddf6e","Type":"ContainerStarted","Data":"5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0"} Jan 29 15:40:09 crc kubenswrapper[5008]: I0129 15:40:09.051587 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fbjsd" 
event={"ID":"346fd378-8582-44af-8332-dad183bddf6e","Type":"ContainerStarted","Data":"34bfddb8b2aa4c65caa162750a2c933a9e28ae7f64daf2f02258b413a9bf62fd"} Jan 29 15:40:09 crc kubenswrapper[5008]: I0129 15:40:09.071160 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fbjsd" podStartSLOduration=1.2556275669999999 podStartE2EDuration="6.071138678s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.836996114 +0000 UTC m=+747.509850351" lastFinishedPulling="2026-01-29 15:40:08.652507235 +0000 UTC m=+752.325361462" observedRunningTime="2026-01-29 15:40:09.066360792 +0000 UTC m=+752.739215049" watchObservedRunningTime="2026-01-29 15:40:09.071138678 +0000 UTC m=+752.743992925" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.063300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" event={"ID":"6111be19-5e01-42e4-b4cf-3728e3ee4a6f","Type":"ContainerStarted","Data":"fb7241b0aeb8cb74e9b6bbc1ffbe469525775788ea0f59db5ce9eeb1fa467092"} Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.063686 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.065279 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" event={"ID":"1217edcf-8ec1-4354-8fbe-a9325b564932","Type":"ContainerStarted","Data":"9e452f590aa4034f58c037627564780fcf4c1501ec00ba88da98d01c3b1a302c"} Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.085589 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" podStartSLOduration=1.460760493 podStartE2EDuration="7.085570582s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.865566256 +0000 UTC m=+747.538420493" lastFinishedPulling="2026-01-29 15:40:09.490376345 +0000 UTC m=+753.163230582" observedRunningTime="2026-01-29 15:40:10.080653204 +0000 UTC m=+753.753507441" watchObservedRunningTime="2026-01-29 15:40:10.085570582 +0000 UTC m=+753.758424819" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.099625 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" podStartSLOduration=1.492104221 podStartE2EDuration="7.099606052s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.792510557 +0000 UTC m=+747.465364794" lastFinishedPulling="2026-01-29 15:40:09.400012388 +0000 UTC m=+753.072866625" observedRunningTime="2026-01-29 15:40:10.095627275 +0000 UTC m=+753.768481512" watchObservedRunningTime="2026-01-29 15:40:10.099606052 +0000 UTC m=+753.772460309" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.021751 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022385 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" containerID="cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022913 5008 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" containerID="cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022991 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" containerID="cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023087 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" containerID="cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023156 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" containerID="cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023226 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023210 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" containerID="cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.088970 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" containerID="cri-o://f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.819043 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.823064 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-acl-logging/0.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.823976 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-controller/0.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.824684 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892193 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9h2f"] Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892432 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892450 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892462 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892471 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892486 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892495 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892507 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892515 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892530 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892538 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892547 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892555 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892566 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892574 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892584 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892591 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892605 5008 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892612 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892627 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892636 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892648 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kubecfg-setup" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892657 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kubecfg-setup" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892670 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892679 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892837 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892853 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892865 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892887 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892898 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892907 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892917 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892927 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892936 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892949 5008 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892961 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.893081 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.893090 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.895466 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995223 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995324 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995498 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995544 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995597 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995668 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995696 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995759 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995846 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995873 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995910 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995979 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997093 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997136 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.999663 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.999910 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.001019 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash" (OuterVolumeSpecName: "host-slash") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002050 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002118 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002117 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002202 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log" (OuterVolumeSpecName: "node-log") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002205 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002327 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002347 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket" (OuterVolumeSpecName: "log-socket") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002483 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002965 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.003061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.006086 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc" (OuterVolumeSpecName: "kube-api-access-d2xcc") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "kube-api-access-d2xcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.008370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.028436 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.091430 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/2.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092006 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092050 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0" exitCode=2 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092153 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092720 5008 scope.go:117] "RemoveContainer" containerID="a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097267 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097477 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097659 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098148 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098198 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098246 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098282 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098466 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098679 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098801 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098829 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098891 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099013 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099080 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099147 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099224 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099302 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100558 5008 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100596 5008 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100617 5008 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100634 5008 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100651 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100673 5008 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100692 5008 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100712 5008 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100729 5008 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100746 5008 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100762 5008 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100803 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100821 5008 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100838 5008 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100858 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100876 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100895 5008 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100913 5008 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100951 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100969 5008 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.102050 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-acl-logging/0.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.102699 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-controller/0.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103382 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103443 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103463 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103498 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103518 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103537 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103471 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103570 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103587 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103600 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103612 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" exitCode=143 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103624 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" exitCode=143 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103642 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103658 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103673 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103691 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103707 5008 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103716 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103725 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103732 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103739 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103747 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103754 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103762 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103770 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103797 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103810 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103821 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103828 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103835 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103842 5008 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103849 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103856 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103864 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103871 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103877 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103900 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103911 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103918 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103925 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103933 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103940 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103947 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103954 5008 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103961 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103969 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103979 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103991 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103999 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104006 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104013 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104019 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104026 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104034 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104041 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104048 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104054 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.193546 5008 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.196380 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209084 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209133 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209148 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209162 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209198 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209272 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209308 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209366 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209416 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: 
\"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209484 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209504 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209533 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210396 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210462 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210471 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210520 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 
crc kubenswrapper[5008]: I0129 15:40:14.210567 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210621 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210642 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210737 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210743 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210852 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.211467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.217325 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.240936 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.520117 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: W0129 15:40:14.550007 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252dea6f_dc2c_4c83_8930_535e5b0f6cdb.slice/crio-ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4 WatchSource:0}: Error finding container ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4: Status 404 returned error can't find the container with id ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.637296 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.664270 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.690266 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.713496 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.736173 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.764950 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.783322 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.803405 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 
crc kubenswrapper[5008]: I0129 15:40:14.821398 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.843933 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.861837 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.862210 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.862248 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.862278 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.863495 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863531 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863544 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.863932 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863981 5008 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864010 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864253 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864283 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864308 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864518 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864568 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864583 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864870 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does 
not exist" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864919 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864935 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.865149 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865176 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865189 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.865671 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865746 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865760 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.866126 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866202 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866261 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.866642 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866700 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866715 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866947 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867003 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867257 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with 
c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867277 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867532 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867552 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867693 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867707 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867976 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867998 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868274 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868301 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868476 5008 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868500 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868819 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868850 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.869937 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.869959 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870140 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870155 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870324 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 
29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870336 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870489 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870502 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870635 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870646 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870768 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870822 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870974 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870987 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871183 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status 
\"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871195 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871413 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871440 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871635 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871649 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871858 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871883 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872017 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872029 5008 scope.go:117] "RemoveContainer" 
containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872157 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872177 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872447 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872496 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.873902 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.873927 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874129 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874145 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874302 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find 
container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874323 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874492 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874507 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874654 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874671 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874811 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874823 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874946 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874958 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.875077 5008 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.111309 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/2.log" Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.111501 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"d46a39529b97b61410e13a7d9304aa0dd14dbc6d16966288979eb24becea51db"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114036 5008 generic.go:334] "Generic (PLEG): container finished" podID="252dea6f-dc2c-4c83-8930-535e5b0f6cdb" containerID="bb73a6ec1fa63921bda754c59c764e3d2bd7db1e1393b4a9216781bf6be1c628" exitCode=0 Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerDied","Data":"bb73a6ec1fa63921bda754c59c764e3d2bd7db1e1393b4a9216781bf6be1c628"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114176 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.342445 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d092513-7735-4c98-9734-57bc46b99280" path="/var/lib/kubelet/pods/1d092513-7735-4c98-9734-57bc46b99280/volumes" Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134425 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"dd7fd4620a00cce2e9471153be8380bfedf0aa02f055f8c8eb7c8213056c94cc"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134480 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"72c1ca5d6995c298daadff32b340b5c8ac9f6657fe5d11b2744d0d7bc88498cc"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134558 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"243c6d38dfa7aec0df8e192e309178df3c79e26176d7e0c55ec79c45bd588bd2"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"c9d3d074e21fba1b8b2b0b6c9b269e1ea430aaf44a008d77d3577f7b0c3f056c"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134597 5008 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"070cb0ab65e3f22ee8eb14977bc7ad1cd9a0ce4c6bae2bf411b38ae768696216"} Jan 29 15:40:17 crc kubenswrapper[5008]: I0129 15:40:17.143579 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"54228a876c93cb98d1f7f195a35b2db9846b1a211ccb6604cf4b5f4cc5e72ae0"} Jan 29 15:40:18 crc kubenswrapper[5008]: I0129 15:40:18.581644 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:19 crc kubenswrapper[5008]: I0129 15:40:19.158175 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"0e385850439d8343e6a7c9a32f04d03f3a179368ab5366fd5c6c2a330cff055a"} Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.177714 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"3efbffe74014b04edd9c26fd66f4583e39dde1552fa69a8c378893b640904fe3"} Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.178192 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.178246 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.223096 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.224922 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" podStartSLOduration=8.224901451 podStartE2EDuration="8.224901451s" podCreationTimestamp="2026-01-29 15:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:40:21.219238914 +0000 UTC m=+764.892093171" watchObservedRunningTime="2026-01-29 15:40:21.224901451 +0000 UTC m=+764.897755728" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.184616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.216627 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.823915 5008 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:40:43 crc kubenswrapper[5008]: I0129 15:40:43.990679 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:40:43 crc kubenswrapper[5008]: I0129 15:40:43.991367 5008 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:40:44 crc kubenswrapper[5008]: I0129 15:40:44.553176 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.276490 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.278970 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.284306 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.293230 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373492 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373631 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373834 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475492 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475558 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: 
\"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475626 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.476162 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.476572 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.503254 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.626517 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.083857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.439231 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerStarted","Data":"45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b"} Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.439678 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerStarted","Data":"a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4"} Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.447699 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b" exitCode=0 Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.447765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b"} Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.571351 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.572763 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.581991 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707271 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809290 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809359 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809719 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809813 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.830804 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.925861 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.170293 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454549 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" exitCode=0 Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274"} Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454927 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerStarted","Data":"2b7ddcb62ecbf05357c096771ab213c80505ee7aadc4ebe5c0c9a2c9f79dd618"} Jan 29 15:41:05 crc kubenswrapper[5008]: I0129 15:41:05.464325 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="52c8b124b65393d43aabd0cbd342b413321b2e5bbf39a0745c27f0859f1430c4" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[5008]: I0129 15:41:05.464772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"52c8b124b65393d43aabd0cbd342b413321b2e5bbf39a0745c27f0859f1430c4"} Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.478244 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" exitCode=0 Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.478396 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4"} Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.486735 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="e209c509177916138d041664c5ab18ee9523ce749806cd585afa3713d8559e13" exitCode=0 Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.486841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"e209c509177916138d041664c5ab18ee9523ce749806cd585afa3713d8559e13"} Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.500500 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" 
event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerStarted","Data":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.522120 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4krl" podStartSLOduration=1.858617206 podStartE2EDuration="4.522105705s" podCreationTimestamp="2026-01-29 15:41:03 +0000 UTC" firstStartedPulling="2026-01-29 15:41:04.459438283 +0000 UTC m=+808.132292530" lastFinishedPulling="2026-01-29 15:41:07.122926762 +0000 UTC m=+810.795781029" observedRunningTime="2026-01-29 15:41:07.520727551 +0000 UTC m=+811.193581788" watchObservedRunningTime="2026-01-29 15:41:07.522105705 +0000 UTC m=+811.194959942" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.731003 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861641 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861857 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.862580 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle" (OuterVolumeSpecName: "bundle") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.873049 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch" (OuterVolumeSpecName: "kube-api-access-9vvch") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "kube-api-access-9vvch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.963320 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.963369 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.014760 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util" (OuterVolumeSpecName: "util") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.064631 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.508944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4"} Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.509018 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.509079 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.888642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889203 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889217 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889231 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="util" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889237 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="util" Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889252 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="pull" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889259 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="pull" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889366 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889799 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.891521 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.891537 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.894794 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8mbqk" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.899043 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.003465 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgw85\" (UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.105031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgw85\" (UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.123305 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgw85\" 
(UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.203220 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.401596 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.526970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" event={"ID":"5fab4312-8998-4667-af25-ba459fcb4a68","Type":"ContainerStarted","Data":"cec16a797b95797585798004c4b06dcbd977b678809533a539c0fa270affa418"} Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.928027 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.928390 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.990976 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.991068 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:14 crc kubenswrapper[5008]: I0129 15:41:14.988840 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4krl" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" probeResult="failure" output=< Jan 29 15:41:14 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:41:14 crc kubenswrapper[5008]: > Jan 29 15:41:18 crc kubenswrapper[5008]: I0129 15:41:18.584061 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" event={"ID":"5fab4312-8998-4667-af25-ba459fcb4a68","Type":"ContainerStarted","Data":"999f53e8a1e30402259b51e8007e6fc217a82447d0da60a1d1277a177303b708"} Jan 29 15:41:18 crc kubenswrapper[5008]: I0129 15:41:18.611472 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" podStartSLOduration=2.064072083 podStartE2EDuration="8.611449144s" podCreationTimestamp="2026-01-29 15:41:10 +0000 UTC" firstStartedPulling="2026-01-29 15:41:11.413804104 +0000 UTC m=+815.086658351" lastFinishedPulling="2026-01-29 15:41:17.961181165 +0000 UTC m=+821.634035412" observedRunningTime="2026-01-29 15:41:18.611365662 +0000 UTC m=+822.284219979" watchObservedRunningTime="2026-01-29 15:41:18.611449144 +0000 UTC m=+822.284303421" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.550172 5008 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.551693 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.554019 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-72hz4" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.563848 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.572796 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.573968 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.576586 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.604722 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8hxxx"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.605800 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.636897 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641849 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641951 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642140 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642290 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhxt\" (UniqueName: \"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642451 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.686363 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.689933 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.696074 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698428 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ls5ll" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698469 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698496 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743583 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743647 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743704 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhxt\" (UniqueName: 
\"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743740 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743775 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743818 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743841 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743881 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743912 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744426 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744602 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.757405 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.760521 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.760589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.761171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhxt\" (UniqueName: \"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845361 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845429 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845458 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: E0129 15:41:21.845646 5008 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 15:41:21 crc kubenswrapper[5008]: E0129 15:41:21.845796 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert podName:75f20405-b349-4e5f-ba1a-b6bf348766ce nodeName:}" failed. No retries permitted until 2026-01-29 15:41:22.345741101 +0000 UTC m=+826.018595338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-dvn47" (UID: "75f20405-b349-4e5f-ba1a-b6bf348766ce") : secret "plugin-serving-cert" not found Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.846346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.866764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.871167 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.884225 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.885888 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.893501 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.897002 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.927661 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946532 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946590 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946671 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946757 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: W0129 15:41:21.958832 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeee9730_825d_4a7e_9ef1_d735b1bddd07.slice/crio-1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021 WatchSource:0}: Error finding container 1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021: Status 404 returned error can't find the container with id 1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 
15:41:22.048042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048108 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048266 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048293 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.051391 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.051420 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.052029 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.053966 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.054104 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.054850 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.071119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.148456 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.160611 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7e5f12_26c5_4197_81ed_559569651fab.slice/crio-a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe WatchSource:0}: Error finding container a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe: Status 404 returned error can't find the container with id a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.258841 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.335227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.347621 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5379965a_18ce_41a4_8753_7a70ed4a5efc.slice/crio-1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484 WatchSource:0}: Error finding container 1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484: Status 404 returned error can't find the container with id 1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.350950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.359852 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.513893 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.531544 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e6ae51d_3821_446e_9067_fa071506ad47.slice/crio-3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295 WatchSource:0}: Error finding container 3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295: Status 404 returned error can't find the container with id 3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.605464 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.607873 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.608348 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7784897869-4b45r" event={"ID":"5e6ae51d-3821-446e-9067-fa071506ad47","Type":"ContainerStarted","Data":"3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.609622 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8hxxx" event={"ID":"beee9730-825d-4a7e-9ef1-d735b1bddd07","Type":"ContainerStarted","Data":"1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.610902 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" event={"ID":"6a7e5f12-26c5-4197-81ed-559569651fab","Type":"ContainerStarted","Data":"a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.841455 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.858354 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75f20405_b349_4e5f_ba1a_b6bf348766ce.slice/crio-0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2 WatchSource:0}: Error finding container 0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2: Status 404 returned error can't find the container with id 0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2 Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.617221 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7784897869-4b45r" event={"ID":"5e6ae51d-3821-446e-9067-fa071506ad47","Type":"ContainerStarted","Data":"1a27d433a0f71d8244bbffd1ba9aad37a6a4b581856277b05f1dee6abaf8a784"} Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.619772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" event={"ID":"75f20405-b349-4e5f-ba1a-b6bf348766ce","Type":"ContainerStarted","Data":"0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2"} Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.636412 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7784897869-4b45r" podStartSLOduration=2.636394165 podStartE2EDuration="2.636394165s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:41:23.632549891 +0000 UTC m=+827.305404128" watchObservedRunningTime="2026-01-29 15:41:23.636394165 +0000 UTC m=+827.309248402" Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.986259 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:24 crc kubenswrapper[5008]: I0129 15:41:24.029072 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:24 crc kubenswrapper[5008]: I0129 15:41:24.213063 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 
15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.632029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8hxxx" event={"ID":"beee9730-825d-4a7e-9ef1-d735b1bddd07","Type":"ContainerStarted","Data":"256527c06d4ff808980cdf28e7090e8f13a3e67879d7867ad01d3e7a3a5b9977"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.632594 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.634597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" event={"ID":"6a7e5f12-26c5-4197-81ed-559569651fab","Type":"ContainerStarted","Data":"57aa5848698dbbb36c1797fc413596c957e4cd0e709452a69362eabb0d85a81e"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.634881 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.638855 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"ba905c62e61bf16ac634d619ed5b107d2e63c19383f9df4e2387ea322a278500"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.639011 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4krl" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" containerID="cri-o://e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" gracePeriod=2 Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.653173 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8hxxx" podStartSLOduration=2.16896541 podStartE2EDuration="4.6531557s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:21.961965645 +0000 UTC m=+825.634819872" lastFinishedPulling="2026-01-29 15:41:24.446155885 +0000 UTC m=+828.119010162" observedRunningTime="2026-01-29 15:41:25.650554868 +0000 UTC m=+829.323409125" watchObservedRunningTime="2026-01-29 15:41:25.6531557 +0000 UTC m=+829.326009947" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.668266 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" podStartSLOduration=2.382695214 podStartE2EDuration="4.668248766s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.162420897 +0000 UTC m=+825.835275134" lastFinishedPulling="2026-01-29 15:41:24.447974449 +0000 UTC m=+828.120828686" observedRunningTime="2026-01-29 15:41:25.66553548 +0000 UTC m=+829.338389737" watchObservedRunningTime="2026-01-29 15:41:25.668248766 +0000 UTC m=+829.341103003" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.037589 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.107816 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.107977 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.108017 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.109299 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities" (OuterVolumeSpecName: "utilities") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.112126 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9" (OuterVolumeSpecName: "kube-api-access-mpmc9") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "kube-api-access-mpmc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.209871 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.209921 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.263848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.312155 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.645084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" event={"ID":"75f20405-b349-4e5f-ba1a-b6bf348766ce","Type":"ContainerStarted","Data":"09b95f8b8c4a1f8411757fded669ac923c7d1fb8c53c82f277add267da6a3f1d"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647680 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" exitCode=0 Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647733 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647744 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647826 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"2b7ddcb62ecbf05357c096771ab213c80505ee7aadc4ebe5c0c9a2c9f79dd618"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647852 5008 scope.go:117] "RemoveContainer" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.666570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" podStartSLOduration=3.001872721 podStartE2EDuration="5.66654812s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.860894953 +0000 UTC m=+826.533749200" lastFinishedPulling="2026-01-29 15:41:25.525570362 +0000 UTC m=+829.198424599" observedRunningTime="2026-01-29 15:41:26.663890235 +0000 UTC m=+830.336744492" watchObservedRunningTime="2026-01-29 15:41:26.66654812 +0000 UTC m=+830.339402377" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.689159 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.695318 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.052450 5008 scope.go:117] "RemoveContainer" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.105079 5008 scope.go:117] "RemoveContainer" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.137887 5008 scope.go:117] "RemoveContainer" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.138758 5008 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": container with ID starting with e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a not found: ID does not exist" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.138860 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} err="failed to get container status \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": rpc error: code = NotFound desc = could not find container \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": container with ID starting with e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.138903 5008 scope.go:117] "RemoveContainer" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.139426 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": container with ID starting with 8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4 not found: ID does not exist" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139492 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4"} err="failed to get container status \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": rpc error: code = NotFound desc = could not find container \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": container with ID starting with 8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4 not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139528 5008 scope.go:117] "RemoveContainer" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.139916 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": container with ID starting with cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274 not found: ID does not exist" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139958 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274"} err="failed to get container status \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": rpc error: code = NotFound desc = could not find container \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": container with ID starting with cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274 not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.337996 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" path="/var/lib/kubelet/pods/800868e4-e114-49d4-a9b4-3ee8fc4ea341/volumes" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.655036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"c3ca73bc03981863ddf8f1a68738ef20dbc2d8fe1d7193bd6471bc69d2f0c5b7"} Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.690177 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" podStartSLOduration=1.934683419 podStartE2EDuration="6.689980372s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.351038762 +0000 UTC m=+826.023893039" lastFinishedPulling="2026-01-29 15:41:27.106335745 +0000 UTC m=+830.779189992" observedRunningTime="2026-01-29 15:41:27.676742862 +0000 UTC m=+831.349597119" watchObservedRunningTime="2026-01-29 15:41:27.689980372 +0000 UTC m=+831.362834619" Jan 29 15:41:31 crc kubenswrapper[5008]: I0129 15:41:31.969474 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.260015 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.260195 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.267694 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.701835 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.795617 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:41 crc kubenswrapper[5008]: I0129 15:41:41.903705 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.990543 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.991051 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.991143 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.992101 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.992259 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" gracePeriod=600 Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.789283 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" exitCode=0 Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.789422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.790159 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.790189 5008 scope.go:117] "RemoveContainer" containerID="9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.458153 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459174 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459212 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459241 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-utilities" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459254 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-utilities" Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459283 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-content" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459296 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-content" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459487 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.461141 5008 util.go:30] "No sandbox for pod can be found. 
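[Annotation] The cpu_manager/state_mem/memory_manager lines above drop per-container resource assignments left behind by the deleted pod UID before admitting a new pod. A toy sketch of that bookkeeping; the map layout is illustrative, not the kubelet's actual state store:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    type assignments map[key]string // e.g. container -> assigned CPU set

    // removeStaleState drops any leftover assignment for a pod UID that no
    // longer exists, mirroring the "RemoveStaleState: removing container" /
    // "Deleted CPUSet assignment" pairs in the log.
    func (a assignments) removeStaleState(podUID string, containers ...string) {
    	for _, c := range containers {
    		k := key{podUID, c}
    		if _, ok := a[k]; ok {
    			fmt.Printf("RemoveStaleState: removing container %s/%s\n", podUID, c)
    			delete(a, k)
    		}
    	}
    }

    func main() {
    	a := assignments{
    		{"800868e4", "registry-server"}:   "2-3",
    		{"800868e4", "extract-utilities"}: "4",
    	}
    	a.removeStaleState("800868e4", "registry-server", "extract-utilities", "extract-content")
    }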
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.463843 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.470984 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546743 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546914 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546988 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647720 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647877 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.648665 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.649511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.676764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.796620 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.025244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:57 crc kubenswrapper[5008]: W0129 15:41:57.030978 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451500d6_673a_42ac_84b5_75d3b9d46998.slice/crio-97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2 WatchSource:0}: Error finding container 97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2: Status 404 returned error can't find the container with id 97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.848034 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" containerID="cri-o://df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" gracePeriod=15 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"c7d76c04dc424b63c19953db970e3e26a1b3ddd5f0a8ed063c0b7d3a54534b5f"} Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887457 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="c7d76c04dc424b63c19953db970e3e26a1b3ddd5f0a8ed063c0b7d3a54534b5f" exitCode=0 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerStarted","Data":"97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2"} Jan 29 15:41:58 crc 
kubenswrapper[5008]: I0129 15:41:58.240982 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2rk6_3f7de4a5-3819-41c0-9e2e-766dcff408bb/console/0.log" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.241356 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274904 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.275963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.276643 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config" (OuterVolumeSpecName: "console-config") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.276939 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277411 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277470 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277853 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278041 5008 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278070 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278083 5008 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278396 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca" (OuterVolumeSpecName: "service-ca") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.282305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.282575 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.284183 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26" (OuterVolumeSpecName: "kube-api-access-4pz26") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "kube-api-access-4pz26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379536 5008 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379592 5008 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379612 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379634 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897094 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2rk6_3f7de4a5-3819-41c0-9e2e-766dcff408bb/console/0.log" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897186 5008 generic.go:334] "Generic (PLEG): container finished" podID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" exitCode=2 Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerDied","Data":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"} Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897298 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerDied","Data":"0d50d0b75f6e0f8a4026a940843934088791e81f1a0bc633f602d35cd43598eb"} Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897319 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897332 5008 scope.go:117] "RemoveContainer" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.926953 5008 scope.go:117] "RemoveContainer" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: E0129 15:41:58.927622 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": container with ID starting with df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5 not found: ID does not exist" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.927684 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"} err="failed to get container status \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": rpc error: code = NotFound desc = could not find container \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": container with ID starting with df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5 not found: ID does not exist" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.947914 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.954557 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:59 crc kubenswrapper[5008]: I0129 15:41:59.332361 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" path="/var/lib/kubelet/pods/3f7de4a5-3819-41c0-9e2e-766dcff408bb/volumes" Jan 29 15:42:00 crc kubenswrapper[5008]: I0129 15:42:00.915398 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="9b68156a7941132fb9e50897803f7be82cd15c7a699bcc0fb1a329ae9ae48b4f" exitCode=0 Jan 29 15:42:00 crc kubenswrapper[5008]: I0129 15:42:00.915466 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"9b68156a7941132fb9e50897803f7be82cd15c7a699bcc0fb1a329ae9ae48b4f"} Jan 29 15:42:01 crc kubenswrapper[5008]: I0129 15:42:01.926689 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="877b0ce4c3dc2404fe743931e1c40d52b8b08e0ded6fb97f08fca18d660def06" exitCode=0 Jan 29 15:42:01 crc kubenswrapper[5008]: I0129 15:42:01.926768 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"877b0ce4c3dc2404fe743931e1c40d52b8b08e0ded6fb97f08fca18d660def06"} Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.254150 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.445951 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.446145 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.446229 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.447136 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle" (OuterVolumeSpecName: "bundle") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.457031 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z" (OuterVolumeSpecName: "kube-api-access-cw99z") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "kube-api-access-cw99z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.470458 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util" (OuterVolumeSpecName: "util") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548249 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548344 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548365 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.940996 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2"} Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.941033 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.941094 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.677869 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"] Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678304 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678315 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678328 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678334 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678349 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="util" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678356 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="util" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678365 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="pull" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678371 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="pull" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678458 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678466 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681367 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681571 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681658 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.682286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vzd7g"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.682976 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.708462 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"]
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744733 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744957 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845862 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.863017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.865288 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.867920 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.933078 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"]
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.933707 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936361 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936381 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936972 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tb9k6"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946865 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.953588 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"]
Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.992288 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048162 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.052624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.052628 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.070614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.446610 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"] Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.687034 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"] Jan 29 15:42:12 crc kubenswrapper[5008]: W0129 15:42:12.688667 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42235713_405f_4dc1_9e60_3b1615ec49a2.slice/crio-00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b WatchSource:0}: Error finding container 00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b: Status 404 returned error can't find the container with id 00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.988323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" event={"ID":"42235713-405f-4dc1-9e60-3b1615ec49a2","Type":"ContainerStarted","Data":"00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b"} Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.989303 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" event={"ID":"65797f8d-98da-4cbc-a7df-cd6d00fda635","Type":"ContainerStarted","Data":"992a9109d9f61c2dffc8a568ce4f4d2ef6f3e1496f092aea367052b4a5d0bc40"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.021433 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" event={"ID":"65797f8d-98da-4cbc-a7df-cd6d00fda635","Type":"ContainerStarted","Data":"dd6ecf4d9c9d10631c17c9542fcc50fc6daee52b69dd97e0eb43fdf0fecf228d"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.022188 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.034033 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" event={"ID":"42235713-405f-4dc1-9e60-3b1615ec49a2","Type":"ContainerStarted","Data":"4bd0ea5c3666d117001376d771c4539d9a45028b6d8c8333357dda28aeb1d5b9"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.034410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.072478 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" podStartSLOduration=2.439297115 podStartE2EDuration="7.072457309s" podCreationTimestamp="2026-01-29 15:42:11 +0000 UTC" firstStartedPulling="2026-01-29 15:42:12.453599401 +0000 UTC m=+876.126453638" lastFinishedPulling="2026-01-29 15:42:17.086759605 +0000 UTC m=+880.759613832" observedRunningTime="2026-01-29 15:42:18.066011782 +0000 UTC m=+881.738866019" watchObservedRunningTime="2026-01-29 15:42:18.072457309 +0000 UTC m=+881.745311566" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.091825 5008 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" podStartSLOduration=2.670606008 podStartE2EDuration="7.091805409s" podCreationTimestamp="2026-01-29 15:42:11 +0000 UTC" firstStartedPulling="2026-01-29 15:42:12.691291258 +0000 UTC m=+876.364145495" lastFinishedPulling="2026-01-29 15:42:17.112490659 +0000 UTC m=+880.785344896" observedRunningTime="2026-01-29 15:42:18.087348241 +0000 UTC m=+881.760202508" watchObservedRunningTime="2026-01-29 15:42:18.091805409 +0000 UTC m=+881.764659676" Jan 29 15:42:32 crc kubenswrapper[5008]: I0129 15:42:32.266479 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:51 crc kubenswrapper[5008]: I0129 15:42:51.995663 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.699419 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-95tm6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.701551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.703576 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.703596 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.704043 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z8dgm" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.706591 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.707270 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.708623 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.726612 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.784366 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dmtw7"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.785178 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787717 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787917 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787766 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787840 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bf7md" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.813127 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.813914 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.816115 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818340 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818366 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818392 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818439 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818455 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 
29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818483 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818514 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818531 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818547 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818564 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818621 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818635 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") 
" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.824004 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920199 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920554 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920761 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920911 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921021 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921111 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: 
\"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921304 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921402 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.920841 5008 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921571 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.921610 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:53.421569099 +0000 UTC m=+917.094423386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "metallb-memberlist" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921749 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921883 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922011 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921653 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.921029 5008 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.922274 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:53.422250815 +0000 UTC m=+917.095105052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "speaker-certs-secret" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922347 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922413 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922540 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922591 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.923064 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.924132 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.927632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.934285 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.934716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.937245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.939898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.940832 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.946384 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.956127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.021179 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.025481 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.127005 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.265542 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"a9ec40b668ff751f03ae83da8d6adacd7ae1faaf5fa0aa62bef4608b1c387853"} Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.342506 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.342666 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2m887,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-95tm6_metallb-system(17fc1fa7-5758-4768-a6f5-5b63b63d0948): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.345878 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-95tm6" podUID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.350325 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.428018 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.428305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.428208 5008 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.428416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:54.428392915 +0000 UTC m=+918.101247142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "metallb-memberlist" not found Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.440683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.461507 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.274878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"9e8c669b0c62eb6a9f8048e95d9ff90c082f08ad0dad0416ed48e496b71ccd6a"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275172 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275187 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"f3f3760040ccd43614b9a8bebd2fa4142c416d8f85600f954ab9e93d30f25e99"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275198 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"6bc269d2a8131b3e266a6aed301e6f1c63c90be6b88abca8e9c021c385871d0f"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.276370 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" event={"ID":"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07","Type":"ContainerStarted","Data":"3e79b74cb5e6188efd8f04e4c6248c27fc27d02321f61d7b535be2c547e6371e"} Jan 29 15:42:54 crc kubenswrapper[5008]: E0129 15:42:54.278605 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-95tm6" podUID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.293066 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bzslg" podStartSLOduration=2.293046682 podStartE2EDuration="2.293046682s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:42:54.291435623 +0000 UTC m=+917.964289870" watchObservedRunningTime="2026-01-29 15:42:54.293046682 +0000 UTC m=+917.965900929" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.442049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.450249 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.598027 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: W0129 15:42:54.618751 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8927915f_8333_415c_82e1_47d948a6e8ad.slice/crio-604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653 WatchSource:0}: Error finding container 604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653: Status 404 returned error can't find the container with id 604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653 Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282895 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"1d5fc6e003dc2d03f9c011bb9895ba308880490ea798b98530077ad885d16c7a"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282958 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"a281febc92271ed4741cfa48b172c504b779aedf1063a00d42e14f3869ebae6f"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282971 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.283175 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.301904 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dmtw7" podStartSLOduration=3.301886238 podStartE2EDuration="3.301886238s" 
podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:42:55.301203521 +0000 UTC m=+918.974057768" watchObservedRunningTime="2026-01-29 15:42:55.301886238 +0000 UTC m=+918.974740475" Jan 29 15:43:00 crc kubenswrapper[5008]: I0129 15:43:00.317406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" event={"ID":"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07","Type":"ContainerStarted","Data":"1ea2b5bbf48ed8cfd5ae2cdc50e9ac14dd77005e271708a9da6c7fee15f9e08a"} Jan 29 15:43:01 crc kubenswrapper[5008]: I0129 15:43:01.331399 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:43:01 crc kubenswrapper[5008]: I0129 15:43:01.345552 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" podStartSLOduration=2.680539301 podStartE2EDuration="9.345523091s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="2026-01-29 15:42:53.465595527 +0000 UTC m=+917.138449764" lastFinishedPulling="2026-01-29 15:43:00.130579287 +0000 UTC m=+923.803433554" observedRunningTime="2026-01-29 15:43:01.339920155 +0000 UTC m=+925.012774452" watchObservedRunningTime="2026-01-29 15:43:01.345523091 +0000 UTC m=+925.018377338" Jan 29 15:43:03 crc kubenswrapper[5008]: I0129 15:43:03.134989 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:43:04 crc kubenswrapper[5008]: I0129 15:43:04.602384 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dmtw7" Jan 29 15:43:06 crc kubenswrapper[5008]: I0129 15:43:06.356655 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="231e5de5485ddfe294e1d3a81c8d79d122a0084f5cf9936952c289b36c0a733d" exitCode=0 Jan 29 15:43:06 crc kubenswrapper[5008]: I0129 15:43:06.356773 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"231e5de5485ddfe294e1d3a81c8d79d122a0084f5cf9936952c289b36c0a733d"} Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.368193 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="33720197a14eea329aa19313ef67e6121dfb318eeb5363d329c2f32b75b0e16e" exitCode=0 Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.368295 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"33720197a14eea329aa19313ef67e6121dfb318eeb5363d329c2f32b75b0e16e"} Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.585188 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.586043 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.588679 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.588936 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.590541 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fpjp4" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.608152 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.731163 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.833066 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.851905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.905078 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.130817 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:08 crc kubenswrapper[5008]: W0129 15:43:08.158920 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a7f22_8b15_4532_a345_2a9da518679f.slice/crio-a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677 WatchSource:0}: Error finding container a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677: Status 404 returned error can't find the container with id a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677 Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.377531 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="3170e1b36932438726b302f82a0fbce3307979c9fc880212c37283db916ec3a6" exitCode=0 Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.377565 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"3170e1b36932438726b302f82a0fbce3307979c9fc880212c37283db916ec3a6"} Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.378354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerStarted","Data":"a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677"} Jan 29 15:43:09 crc kubenswrapper[5008]: I0129 15:43:09.386226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"98b69b910e0313ca12d3067fb699555c3f870f775e6b1814a716e32c11f4b945"} Jan 29 15:43:09 crc kubenswrapper[5008]: I0129 15:43:09.386612 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"7d461416194708ba876b149d24894e85b55fbf637b48290d0123e59d20667a8e"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.344454 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"0fb84983516a5b2a40325e8a28b98266055c9d6dbb4f687f6c8c24306ba50dff"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408341 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"17a10cc56ce1eb6b41fb2a54a58710f8ba75c5fadd9bdbbb9452e25e3550c7c2"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408364 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"d115d747aeb4d5a4087cba5a82125329a21f3ead612d7073181e40ee486b435f"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408383 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" 
event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"3716342af50a7804afbe37daeb3c0fb1382c3d99c797eb28bb8f228a26d9fa27"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.410001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerStarted","Data":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.455210 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-95tm6" podStartSLOduration=-9223372017.399584 podStartE2EDuration="19.455192762s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="2026-01-29 15:42:53.205477247 +0000 UTC m=+916.878331484" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:43:11.438124448 +0000 UTC m=+935.110978725" watchObservedRunningTime="2026-01-29 15:43:11.455192762 +0000 UTC m=+935.128046999" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.951177 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bvg8g" podStartSLOduration=2.687736361 podStartE2EDuration="4.951151114s" podCreationTimestamp="2026-01-29 15:43:07 +0000 UTC" firstStartedPulling="2026-01-29 15:43:08.175576065 +0000 UTC m=+931.848430302" lastFinishedPulling="2026-01-29 15:43:10.438990818 +0000 UTC m=+934.111845055" observedRunningTime="2026-01-29 15:43:11.454324191 +0000 UTC m=+935.127178438" watchObservedRunningTime="2026-01-29 15:43:11.951151114 +0000 UTC m=+935.624005391" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.955441 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.956888 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.961297 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.107388 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.208294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.229842 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.306935 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.418038 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bvg8g" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" containerID="cri-o://e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" gracePeriod=2 Jan 29 15:43:12 crc kubenswrapper[5008]: W0129 15:43:12.534081 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdce8b7e_15b6_41ae_89f3_fd69472b9800.slice/crio-847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0 WatchSource:0}: Error finding container 847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0: Status 404 returned error can't find the container with id 847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0 Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.547045 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.927714 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.019614 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"216a7f22-8b15-4532-a345-2a9da518679f\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.021615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.025811 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl" (OuterVolumeSpecName: "kube-api-access-6mzcl") pod "216a7f22-8b15-4532-a345-2a9da518679f" (UID: "216a7f22-8b15-4532-a345-2a9da518679f"). InnerVolumeSpecName "kube-api-access-6mzcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.031385 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.064410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.121468 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428040 5008 generic.go:334] "Generic (PLEG): container finished" podID="216a7f22-8b15-4532-a345-2a9da518679f" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" exitCode=0 Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428090 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerDied","Data":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428628 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerDied","Data":"a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428677 5008 scope.go:117] "RemoveContainer" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.432174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lv8km" event={"ID":"cdce8b7e-15b6-41ae-89f3-fd69472b9800","Type":"ContainerStarted","Data":"f6b9e7ec67196089535f435273040d6e56dbf49a92f92705d0160a9c45780f32"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.432508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lv8km" event={"ID":"cdce8b7e-15b6-41ae-89f3-fd69472b9800","Type":"ContainerStarted","Data":"847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.462409 5008 scope.go:117] "RemoveContainer" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: E0129 15:43:13.463374 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": container with ID starting with e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a not found: ID does not exist" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.463445 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} err="failed to get container status \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": rpc error: code = NotFound desc = could not find container \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": container with ID starting with e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a not found: ID does not exist" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.468434 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lv8km" podStartSLOduration=2.313658152 podStartE2EDuration="2.468396505s" podCreationTimestamp="2026-01-29 15:43:11 +0000 UTC" firstStartedPulling="2026-01-29 15:43:12.53624855 +0000 UTC m=+936.209102777" lastFinishedPulling="2026-01-29 15:43:12.690986863 +0000 UTC m=+936.363841130" observedRunningTime="2026-01-29 15:43:13.448503921 +0000 UTC m=+937.121358258" watchObservedRunningTime="2026-01-29 15:43:13.468396505 +0000 UTC m=+937.141250792" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.515275 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.524421 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:15 crc kubenswrapper[5008]: I0129 15:43:15.339681 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216a7f22-8b15-4532-a345-2a9da518679f" path="/var/lib/kubelet/pods/216a7f22-8b15-4532-a345-2a9da518679f/volumes" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.307382 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.307959 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.342908 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.535588 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:23 crc kubenswrapper[5008]: I0129 15:43:23.028489 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.207098 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:28 crc kubenswrapper[5008]: E0129 15:43:28.208062 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.208096 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.208449 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.210280 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.216730 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.254612 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-c8v8v" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357667 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357884 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.458716 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.459502 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.459691 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.460564 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.460687 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.494700 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.570585 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:29 crc kubenswrapper[5008]: I0129 15:43:29.014229 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:29 crc kubenswrapper[5008]: I0129 15:43:29.556578 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerStarted","Data":"9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce"} Jan 29 15:43:44 crc kubenswrapper[5008]: I0129 15:43:44.680656 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="7d5c903e1f3ba0cea1d15fc05195cd1530a6583eb5b944d7caab6f6c2c55dd45" exitCode=0 Jan 29 15:43:44 crc kubenswrapper[5008]: I0129 15:43:44.680749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"7d5c903e1f3ba0cea1d15fc05195cd1530a6583eb5b944d7caab6f6c2c55dd45"} Jan 29 15:43:46 crc kubenswrapper[5008]: I0129 15:43:46.697066 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="4f414b7bdd9adf458c9ee14e38eea93ce3d8efca98d0436bf706ddef3cca134b" exitCode=0 Jan 29 15:43:46 crc kubenswrapper[5008]: I0129 15:43:46.697147 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"4f414b7bdd9adf458c9ee14e38eea93ce3d8efca98d0436bf706ddef3cca134b"} Jan 29 15:43:47 crc kubenswrapper[5008]: I0129 15:43:47.707151 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="742ad74ea9369a7369ccd691f0019c383e12bb9305929becea41099d2763e1d2" exitCode=0 Jan 29 15:43:47 crc kubenswrapper[5008]: I0129 15:43:47.707202 5008 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"742ad74ea9369a7369ccd691f0019c383e12bb9305929becea41099d2763e1d2"} Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.015668 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164331 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164447 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.165628 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle" (OuterVolumeSpecName: "bundle") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.174141 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86" (OuterVolumeSpecName: "kube-api-access-blg86") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "kube-api-access-blg86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.178822 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util" (OuterVolumeSpecName: "util") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266229 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266271 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266286 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722024 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce"} Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722332 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722078 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.304068 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.304981 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305001 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.305022 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="util" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305035 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="util" Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.305067 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="pull" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305081 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="pull" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305280 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305968 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.308449 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cf8hf" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.374058 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.454339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.556893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.584346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.623931 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.850933 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:56 crc kubenswrapper[5008]: I0129 15:43:56.769685 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" event={"ID":"9edb96c4-66c6-464b-8dd3-089d6be05a60","Type":"ContainerStarted","Data":"099b0885a305e83598fe4797ef06fe7fa0590be780598e1ffff24e9dbc8124fa"} Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.805320 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" event={"ID":"9edb96c4-66c6-464b-8dd3-089d6be05a60","Type":"ContainerStarted","Data":"907e8712b6d25dae39109b258304e1241c2e97daa46e05e90720eaf5f5d23ea8"} Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.805907 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.844070 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" podStartSLOduration=1.993684027 podStartE2EDuration="6.844047739s" podCreationTimestamp="2026-01-29 15:43:55 +0000 UTC" firstStartedPulling="2026-01-29 15:43:55.861241147 +0000 UTC m=+979.534095394" lastFinishedPulling="2026-01-29 15:44:00.711604859 +0000 UTC m=+984.384459106" observedRunningTime="2026-01-29 15:44:01.840710128 +0000 UTC m=+985.513564405" watchObservedRunningTime="2026-01-29 15:44:01.844047739 +0000 UTC m=+985.516902006" Jan 29 15:44:05 crc kubenswrapper[5008]: I0129 15:44:05.627836 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.485490 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.486585 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.494654 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513206 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513309 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513369 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614252 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614345 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614974 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.634555 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.802420 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.354396 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875746 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d" exitCode=0 Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875821 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d"} Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875851 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerStarted","Data":"debd562bbbd639021d945b4eafb3e69ca2ec6a19be12a7aeaf5f75ffdbc60792"} Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.009610 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.009873 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.011165 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.884148 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:13 crc kubenswrapper[5008]: I0129 15:44:13.991227 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:13 crc kubenswrapper[5008]: I0129 15:44:13.991629 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.323488 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.325116 5008 util.go:30] "No sandbox for pod can be found. 
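
The ErrImagePull and ImagePullBackOff entries above for certified-operators-6kzcj share one root cause: the pull of the extract-content init container image was rejected while requesting a bearer token (HTTP 403 from registry.redhat.io), which usually indicates a missing or invalid pull secret for that registry on this cluster, so retries will keep failing until the credentials change. The machine-config-daemon liveness probe failure in between is unrelated (a local connection refusal on 127.0.0.1:8798). A minimal client-go sketch, with the kubeconfig path an assumption, that lists init containers stuck on pull errors in the affected namespace:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pods, err := cs.CoreV1().Pods("openshift-marketplace").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, st := range p.Status.InitContainerStatuses {
                w := st.State.Waiting
                if w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
                    fmt.Printf("%s: %s: %s\n", p.Name, w.Reason, w.Message)
                }
            }
        }
    }
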
Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.337258 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427258 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.528937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.528980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.529051 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.529889 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.530011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.547139 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.684152 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.972454 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: W0129 15:44:22.983979 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddecefe5c_189e_43f8_88b2_f93a00567c3e.slice/crio-1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea WatchSource:0}: Error finding container 1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea: Status 404 returned error can't find the container with id 1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea Jan 29 15:44:23 crc kubenswrapper[5008]: I0129 15:44:23.121410 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerStarted","Data":"1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea"} Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.452577 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.452703 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.454462 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:24 crc kubenswrapper[5008]: I0129 15:44:24.132523 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2" exitCode=0 Jan 29 15:44:24 crc kubenswrapper[5008]: I0129 15:44:24.132618 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2"} Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.303675 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.303878 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.305069 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:25 crc kubenswrapper[5008]: E0129 15:44:25.140376 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.235462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.236701 5008 util.go:30] "No sandbox for pod can be found. 
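
From here on the two catalog pods alternate between ErrImagePull (a fresh pull attempt just failed) and ImagePullBackOff (kubelet is waiting out a delay before the next attempt). Kubelet tracks that delay with a per-image exponential backoff; the defaults are, to the best of my knowledge, a 10s initial delay doubling up to a 300s cap, so treat those constants as assumptions. A sketch of the doubling behavior using the flowcontrol helper from client-go:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial image pull backoff, 300s cap.
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        image := "registry.redhat.io/redhat/community-operator-index:v4.18"

        for i := 0; i < 7; i++ {
            backoff.Next(image, time.Now()) // record another failed pull
            fmt.Println(backoff.Get(image)) // 10s, 20s, 40s, ... capped at 5m0s
        }
    }
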
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.238864 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pr6jc" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.245069 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.259599 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.260597 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.262536 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bzgh8" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.266452 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.267595 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.270019 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-47pm5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.278343 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.310817 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.311609 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.313663 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vn7c6" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.325964 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.335384 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339222 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339352 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.360850 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.361579 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.363726 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w2b4n" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.379578 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.380332 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.387854 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.388835 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.392453 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-thwfs" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.397402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.397572 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tbwkr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.400932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.418241 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.418922 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.426248 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jz7r5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.427482 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447552 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 
15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447602 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447637 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447708 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447771 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.448853 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.457838 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.459606 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.467222 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j6css" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.469465 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.471549 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.478994 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.483439 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sdg77" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.494806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.506375 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.506725 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.507173 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552497 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552764 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod \"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552918 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553054 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553165 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553264 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553351 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553462 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.553678 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.553832 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.053801648 +0000 UTC m=+1014.726655895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.557947 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.558410 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.563026 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.564320 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.576907 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.578171 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.589697 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fs9lw" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.589970 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6hhjd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.590544 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.598022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.604620 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.604898 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.606949 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.611339 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.622582 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.625462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.626538 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.628062 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sql9t" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.630622 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654534 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod \"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654645 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654685 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654716 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.666386 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.674442 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.677272 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.677908 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod 
\"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.684190 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.688262 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.689829 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.690739 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.693912 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nbtjx" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.702104 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.724759 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.726053 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.727272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.727440 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.728335 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmglr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.748649 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755452 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755516 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.761683 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.769188 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.770195 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.770486 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.771234 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.772198 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.772210 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4fkt4" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.777123 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s9k57" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.780867 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.786439 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.788141 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.796462 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.802092 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-54s5g" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.808572 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.840556 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857276 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857391 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857413 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857436 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857488 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.865299 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.866154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.870351 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9jzzm" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.871085 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.878276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.915357 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.920307 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.959506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969221 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969281 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969512 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc 
kubenswrapper[5008]: I0129 15:44:30.969580 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.970367 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.970434 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.470415719 +0000 UTC m=+1015.143269966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.974578 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.988364 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.989690 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.004124 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-q9vj4" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.004642 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.018881 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.026276 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.047071 5008 util.go:30] "No sandbox for pod can be found. 
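
The "durationBeforeRetry 500ms" above, followed by 1s and 2s on later retries of the same mounts, is kubelet's exponential backoff for failed volume operations, tracked per operation in nestedpendingoperations. A minimal sketch of the doubling schedule; the cap below is an assumption for illustration, not kubelet's exact limit:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Matches the progression seen in this log: 500ms -> 1s -> 2s -> ...
	const maxBackoff = 2 * time.Minute // assumed cap, for illustration only
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
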
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.047620 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.052171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.054755 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.056590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071344 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071428 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.071704 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 
15:44:31.071770 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.07174874 +0000 UTC m=+1015.744602987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.097021 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.117447 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.125151 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.126109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.138677 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vg7sf" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.142154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.151290 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.173648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.173723 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.190814 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.191595 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.199430 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201285 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wddh7" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201772 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201878 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.208108 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.208507 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.213983 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.217286 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.225206 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.225991 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.232488 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hzrzg" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.236689 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.271816 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274969 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.275017 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.294081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: 
Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.325165 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68468eb9_9e76_4f2f_9aba_cc3198e0a241.slice/crio-3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37 WatchSource:0}: Error finding container 3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37: Status 404 returned error can't find the container with id 3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376268 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376339 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376354 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"
Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376591 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376640 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.876625157 +0000 UTC m=+1015.549479394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found
Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376807 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376830 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.876822241 +0000 UTC m=+1015.549676478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.401606 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.403049 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.462765 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"
Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.477184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"
Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.477352 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.502644 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.535568 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.572022 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.604817 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.613748 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a610d2e_cb71_4995_a0e8_f6dc26f7664a.slice/crio-ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a WatchSource:0}: Error finding container ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a: Status 404 returned error can't find the container with id ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.643287 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.809932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.817490 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6196a4fd_8576_412f_9140_cf61b98444a4.slice/crio-a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d WatchSource:0}: Error finding container a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d: Status 404 returned error can't find the container with id a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.832412 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.852249 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.860659 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.865240 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.870091 5008 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76346a9_7ba5_4178_82b7_da9f0c337c08.slice/crio-6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4 WatchSource:0}: Error finding container 6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4: Status 404 returned error can't find the container with id 6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4 Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.884553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.884755 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.884848 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.884827892 +0000 UTC m=+1016.557682129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.885499 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.885680 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.885749 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.885738564 +0000 UTC m=+1016.558592801 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.988565 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae67616_1145_4057_b304_08a322e78d9d.slice/crio-d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b WatchSource:0}: Error finding container d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b: Status 404 returned error can't find the container with id d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.991581 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.090328 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.090574 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.090635 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.090616651 +0000 UTC m=+1017.763470888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.222116 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" event={"ID":"e57e9a97-d32e-4464-b12c-ba44a4643ada","Type":"ContainerStarted","Data":"bd8c840c67bae01776abbad88025e01f5c74f210aa6629db99a41c527fef445e"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.225701 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" event={"ID":"b46e3eea-2330-4b3f-b45d-34ae38a0dde9","Type":"ContainerStarted","Data":"1804d8ec1ba634ff71f1d2c85315037de33f1ad0a73faafbc77fa78b681ea28c"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.227393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" event={"ID":"94a4547d-0c92-41e4-8ca7-64e21df1708e","Type":"ContainerStarted","Data":"237fbc2d054c2f89367ff13756a12bae6b601f1988235ad14ac42243a4b6c2a1"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.228656 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" event={"ID":"68468eb9-9e76-4f2f-9aba-cc3198e0a241","Type":"ContainerStarted","Data":"3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.229691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" event={"ID":"6e775178-095e-451d-bded-b83f229c4231","Type":"ContainerStarted","Data":"657e1185611b2c6ff407043eba326f6775afb83d8d05971076623006954aea79"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.231324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" event={"ID":"6196a4fd-8576-412f-9140-cf61b98444a4","Type":"ContainerStarted","Data":"a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.233205 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" event={"ID":"7a610d2e-cb71-4995-a0e8-f6dc26f7664a","Type":"ContainerStarted","Data":"ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.234406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" event={"ID":"e76346a9-7ba5-4178-82b7-da9f0c337c08","Type":"ContainerStarted","Data":"6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.235461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" event={"ID":"cae67616-1145-4057-b304-08a322e78d9d","Type":"ContainerStarted","Data":"d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.237172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" event={"ID":"14020423-5911-4b69-8889-b12267c9bbf9","Type":"ContainerStarted","Data":"88fac025880987908d4db4aad128cb341cbaa0305f6767863d2b7c09a983a405"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.304536 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.313069 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.319227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.330485 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9dfe223_8569_48bb_8b52_c3fb069208a0.slice/crio-d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029 WatchSource:0}: Error finding container d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029: Status 404 returned error can't find the container with id d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.331447 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.346161 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.352951 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39876a5_4ca3_44e2_a4c5_c6541c2ec812.slice/crio-ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75 WatchSource:0}: Error finding container ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75: Status 404 returned error can't find the container with id ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.357389 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.365259 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4fd527b_7108_4f94_b7a9_bb0b358b8c3c.slice/crio-3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5 WatchSource:0}: Error finding container 3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5: Status 404 returned error can't find the container with id 3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.366123 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.372056 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2d6253_7fa7_41a9_9d0b_002ef590c4db.slice/crio-844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c WatchSource:0}: Error 
Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.372056 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2d6253_7fa7_41a9_9d0b_002ef590c4db.slice/crio-844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c WatchSource:0}: Error finding container 844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c: Status 404 returned error can't find the container with id 844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c
Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.375508 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"]
Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.380562 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"]
Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.381637 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jv9f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-bbsft_openstack-operators(30b3e5fd-7f41-4ed9-a1de-cb282994ad38): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.382618 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2163508_5800_4d97_b8d4_1f3815764822.slice/crio-ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de WatchSource:0}: Error finding container ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de: Status 404 returned error can't find the container with id ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de
Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.382715 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38"
Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.384642 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vtv85_openstack-operators(1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.385734 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1"
Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.386308 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6a1921_bd9b_47c4_8f5f_9443d8e4c08f.slice/crio-14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1 WatchSource:0}: Error finding container 14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1: Status 404 returned error can't find the container with id 14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1
Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.386693 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"]
Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.387761 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-dwhc5_openstack-operators(a2163508-5800-4d97-b8d4-1f3815764822): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.388653 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a92a88_ee29_47fd_b4cf_5e3232ce7573.slice/crio-87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168 WatchSource:0}: Error finding container 87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168: Status 404 returned error can't find the container with id 87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168
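
"ErrImagePull: pull QPS exceeded" here is not a registry failure: kubelet throttles image pulls with a token-bucket limiter configured by --registry-qps and --registry-burst (defaults 5 and 10), and starting this many operator pods at once drains the bucket, so the affected pulls fail fast and are retried with image backoff. A sketch of the gating behavior using the limiter type client-go ships; the loop and counts are illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps=5, burst=10
	for pull := 1; pull <= 15; pull++ {
		if limiter.TryAccept() {
			fmt.Printf("image pull %d: allowed\n", pull)
		} else {
			// kubelet surfaces this case as ErrImagePull: pull QPS exceeded
			fmt.Printf("image pull %d: rejected, retried later with backoff\n", pull)
		}
	}
}
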
exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.389484 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc123ee_b76c_46a7_9aea_76457232036b.slice/crio-5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab WatchSource:0}: Error finding container 5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab: Status 404 returned error can't find the container with id 5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.391359 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wdbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-xjf4m_openstack-operators(ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.391697 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccnhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-klqvj_openstack-operators(27a92a88-ee29-47fd-b4cf-5e3232ce7573): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392613 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392712 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t59lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-zbddd_openstack-operators(4dc123ee-b76c-46a7-9aea-76457232036b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392806 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.394315 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.496636 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.496920 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.497043 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.497018924 +0000 UTC m=+1018.169873161 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.902835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.902923 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903031 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903039 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903098 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.903081778 +0000 UTC m=+1018.575936015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903114 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.903107569 +0000 UTC m=+1018.575961806 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.249731 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" event={"ID":"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1","Type":"ContainerStarted","Data":"d347bb26baa2b5e2390ee7830502bf1d18ba28d924e711333fc6862bfaf47ba2"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.251526 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" event={"ID":"cb2d6253-7fa7-41a9-9d0b-002ef590c4db","Type":"ContainerStarted","Data":"844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.253169 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.254360 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" event={"ID":"27a92a88-ee29-47fd-b4cf-5e3232ce7573","Type":"ContainerStarted","Data":"87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.256707 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.257326 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" event={"ID":"4dc123ee-b76c-46a7-9aea-76457232036b","Type":"ContainerStarted","Data":"5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.261556 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.273543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" event={"ID":"a9dfe223-8569-48bb-8b52-c3fb069208a0","Type":"ContainerStarted","Data":"d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.277248 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" event={"ID":"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f","Type":"ContainerStarted","Data":"14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.278482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" event={"ID":"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c","Type":"ContainerStarted","Data":"3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.279023 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.280675 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" event={"ID":"d39876a5-4ca3-44e2-a4c5-c6541c2ec812","Type":"ContainerStarted","Data":"ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.285694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" event={"ID":"a2163508-5800-4d97-b8d4-1f3815764822","Type":"ContainerStarted","Data":"ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.287884 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" event={"ID":"30b3e5fd-7f41-4ed9-a1de-cb282994ad38","Type":"ContainerStarted","Data":"32e4d6aa080b5c9dcae2577e2453dcc86db2538c8f7f8833e02b71d908e785f6"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.288056 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.289181 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.512833 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.514492 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.522380 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617464 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617559 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617614 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719361 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719460 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719965 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.720001 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.761443 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.834909 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.124937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.125121 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.125424 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.125402102 +0000 UTC m=+1021.798256339 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297379 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297909 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297927 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297969 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297982 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.298018 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.345720 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.535832 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.536050 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.536149 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.536125699 +0000 UTC m=+1022.208979986 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.941555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.941649 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941809 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941821 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941861 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.941843305 +0000 UTC m=+1022.614697542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941915 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.941892327 +0000 UTC m=+1022.614746614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.523572 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.524035 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.525683 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:37 crc kubenswrapper[5008]: I0129 15:44:37.315051 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerStarted","Data":"3d4dceb557efb379fc43836d7c0b6854e7a45385d099f1155ac83813cd0b127b"} Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.186583 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.186753 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.186828 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.186810446 +0000 UTC m=+1029.859664683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.593043 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.593246 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.593331 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.593309661 +0000 UTC m=+1030.266163898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.999037 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.999248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999302 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999422 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.999391476 +0000 UTC m=+1030.672245743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999465 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.999502679 +0000 UTC m=+1030.672356936 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:39 crc kubenswrapper[5008]: E0129 15:44:39.666898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:43 crc kubenswrapper[5008]: I0129 15:44:43.991108 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:43 crc kubenswrapper[5008]: I0129 15:44:43.991567 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.208580 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.215330 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.390301 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tbwkr" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.400013 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.615209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.622666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.721497 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmglr" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.729627 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.021006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.021362 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.028899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.033422 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.153029 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wddh7" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.160906 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.354722 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.354960 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmws6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-qs9wh_openstack-operators(cae67616-1145-4057-b304-08a322e78d9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.356245 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podUID="cae67616-1145-4057-b304-08a322e78d9d" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.393602 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podUID="cae67616-1145-4057-b304-08a322e78d9d" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.907412 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.907617 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twwv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-84h7l_openstack-operators(a9dfe223-8569-48bb-8b52-c3fb069208a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.908848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podUID="a9dfe223-8569-48bb-8b52-c3fb069208a0" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.401241 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podUID="a9dfe223-8569-48bb-8b52-c3fb069208a0" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.488858 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.513280 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.513510 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wxlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-9sf7f_openstack-operators(b46e3eea-2330-4b3f-b45d-34ae38a0dde9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.514685 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podUID="b46e3eea-2330-4b3f-b45d-34ae38a0dde9" Jan 29 15:44:49 crc kubenswrapper[5008]: E0129 15:44:49.406959 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podUID="b46e3eea-2330-4b3f-b45d-34ae38a0dde9" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.484330 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.484564 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmdcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-44qcp_openstack-operators(14020423-5911-4b69-8889-b12267c9bbf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.485767 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podUID="14020423-5911-4b69-8889-b12267c9bbf9" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.041963 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.042143 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wk5pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-q7khh_openstack-operators(e57e9a97-d32e-4464-b12c-ba44a4643ada): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.043430 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" podUID="e57e9a97-d32e-4464-b12c-ba44a4643ada" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.436835 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" podUID="e57e9a97-d32e-4464-b12c-ba44a4643ada" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.437212 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podUID="14020423-5911-4b69-8889-b12267c9bbf9" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.843889 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.844083 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-bjjwz_openstack-operators(d39876a5-4ca3-44e2-a4c5-c6541c2ec812): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.845286 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podUID="d39876a5-4ca3-44e2-a4c5-c6541c2ec812" Jan 29 15:44:52 crc kubenswrapper[5008]: E0129 15:44:52.443139 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podUID="d39876a5-4ca3-44e2-a4c5-c6541c2ec812" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.534086 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.534265 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm9mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-fxz5k_openstack-operators(d4fd527b-7108-4f94-b7a9-bb0b358b8c3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.535480 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podUID="d4fd527b-7108-4f94-b7a9-bb0b358b8c3c" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.890670 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.890999 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7jgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-qhwnb_openstack-operators(e76346a9-7ba5-4178-82b7-da9f0c337c08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.892880 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podUID="e76346a9-7ba5-4178-82b7-da9f0c337c08" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.960739 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.960967 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.962177 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:54 crc kubenswrapper[5008]: E0129 15:44:54.456006 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podUID="d4fd527b-7108-4f94-b7a9-bb0b358b8c3c" Jan 29 15:44:54 crc kubenswrapper[5008]: E0129 15:44:54.456936 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podUID="e76346a9-7ba5-4178-82b7-da9f0c337c08" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.143463 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.146193 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.151532 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.154072 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.154122 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227472 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227661 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333343 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333394 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333448 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.336410 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod 
\"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.342677 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.349226 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.482912 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.795041 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.795546 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-vtv85_openstack-operators(1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.796848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.434662 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44442d63_1bbc_4d1c_9e9d_2a9ad59baf59.slice/crio-b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25 WatchSource:0}: Error finding container b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25: Status 404 returned error can't find the container with id b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25 Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.436481 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.479433 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.490704 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ff89cd9_951e_4907_b60c_a1a1c08007a4.slice/crio-f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d WatchSource:0}: Error finding container f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d: Status 404 returned error can't find the container with id f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.515033 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" event={"ID":"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59","Type":"ContainerStarted","Data":"b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25"} Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.516486 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" event={"ID":"4ff89cd9-951e-4907-b60c-a1a1c08007a4","Type":"ContainerStarted","Data":"f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d"} Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.531515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.540841 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bfb4d07_e2b9_42e2_951c_3d9f2ad23202.slice/crio-4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6 WatchSource:0}: Error finding container 4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6: Status 404 returned error can't find the container with id 4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6 Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 
15:45:02.545543 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.549467 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f5d1ef8_a9b5_428a_b441_b7d763dbd102.slice/crio-f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843 WatchSource:0}: Error finding container f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843: Status 404 returned error can't find the container with id f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843 Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.459732 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.460448 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.461706 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:03 crc 
kubenswrapper[5008]: I0129 15:45:03.535365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" event={"ID":"e57e9a97-d32e-4464-b12c-ba44a4643ada","Type":"ContainerStarted","Data":"c2d1b7d8799ed8d3a59de78318cd3be39480998c5c215d797defc6b83d404a15"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.536234 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.540207 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" event={"ID":"30b3e5fd-7f41-4ed9-a1de-cb282994ad38","Type":"ContainerStarted","Data":"b9ee149dddccb6b9517f9dba8a3e94506d3a274e3c46deca091303149a12db0b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.540692 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.542656 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" event={"ID":"94a4547d-0c92-41e4-8ca7-64e21df1708e","Type":"ContainerStarted","Data":"64f2e5197bfbd62ff69cdf92e9b2adf305abbe989c3646b5d6c2502257b9d949"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.543082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.544387 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" event={"ID":"6e775178-095e-451d-bded-b83f229c4231","Type":"ContainerStarted","Data":"2b112ab79d1cb7717290a5b9b1e5c0e493c3116c27a9deb5b4aade6377ffa3ab"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.544723 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.547528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" event={"ID":"a9dfe223-8569-48bb-8b52-c3fb069208a0","Type":"ContainerStarted","Data":"e25ead091de4fe779a23ce535171ba2caba03f9b38ac241dc91dc997c469f2dd"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.547930 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.551485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" event={"ID":"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f","Type":"ContainerStarted","Data":"40b0246239a5efd3622070802992e9db11b54f99b0aa46b0f17e0f85ee43399b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.551930 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.554932 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" 
podStartSLOduration=2.501005741 podStartE2EDuration="33.554924167s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.842214061 +0000 UTC m=+1015.515068298" lastFinishedPulling="2026-01-29 15:45:02.896132487 +0000 UTC m=+1046.568986724" observedRunningTime="2026-01-29 15:45:03.552676072 +0000 UTC m=+1047.225530319" watchObservedRunningTime="2026-01-29 15:45:03.554924167 +0000 UTC m=+1047.227778414" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.557670 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" event={"ID":"6196a4fd-8576-412f-9140-cf61b98444a4","Type":"ContainerStarted","Data":"edbeeb5eeb10303c1729a87a26721192931478d08054dbc30d7b715261ea147f"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.558244 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.560139 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" event={"ID":"4dc123ee-b76c-46a7-9aea-76457232036b","Type":"ContainerStarted","Data":"029183a84c3497c531d73ec4b22d0d8d25ae57bbac89692e9efb623ca705da2c"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.560491 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.569018 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" event={"ID":"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59","Type":"ContainerStarted","Data":"3ca73951a4eb2de3393864c3bd4a6f147981f65407fc2d62e07b35712b10a665"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.569170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.572456 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" event={"ID":"27a92a88-ee29-47fd-b4cf-5e3232ce7573","Type":"ContainerStarted","Data":"a7daabfe5c20ff13f77d5f49309357873dfc9e580fe3d9d806a7bf1061d9fb7b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.573043 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.578134 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" podStartSLOduration=11.351185617 podStartE2EDuration="33.578120309s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.609211214 +0000 UTC m=+1015.282065451" lastFinishedPulling="2026-01-29 15:44:53.836145906 +0000 UTC m=+1037.509000143" observedRunningTime="2026-01-29 15:45:03.575776302 +0000 UTC m=+1047.248630539" watchObservedRunningTime="2026-01-29 15:45:03.578120309 +0000 UTC m=+1047.250974546" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.588508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" 
event={"ID":"68468eb9-9e76-4f2f-9aba-cc3198e0a241","Type":"ContainerStarted","Data":"dec822d4cfb2ad4f55625cd2cae1ce5a25bcd65ed8be0ea74f9941e89df21308"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.588543 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.594912 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podStartSLOduration=4.014153501 podStartE2EDuration="33.594900155s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.391213174 +0000 UTC m=+1016.064067411" lastFinishedPulling="2026-01-29 15:45:01.971959818 +0000 UTC m=+1045.644814065" observedRunningTime="2026-01-29 15:45:03.591819479 +0000 UTC m=+1047.264673716" watchObservedRunningTime="2026-01-29 15:45:03.594900155 +0000 UTC m=+1047.267754392" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.597561 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" event={"ID":"7a610d2e-cb71-4995-a0e8-f6dc26f7664a","Type":"ContainerStarted","Data":"1b336e59666efcd1490d4a401733d6fd7317ac50554a224e54ce910bc925d425"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.597621 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599470 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerID="ac0b6463e1c89ffcdf2ab1a2ad453e18c97ff25a3453de99022e4a7402303b41" exitCode=0 Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599588 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerDied","Data":"ac0b6463e1c89ffcdf2ab1a2ad453e18c97ff25a3453de99022e4a7402303b41"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599609 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerStarted","Data":"4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.601206 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6" exitCode=0 Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.601247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.609491 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" event={"ID":"a2163508-5800-4d97-b8d4-1f3815764822","Type":"ContainerStarted","Data":"62f64581f3e967e653f0f7bf82542da1b26324470d5536dae67a08ae7718103b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.609963 5008 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.615965 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" event={"ID":"9f5d1ef8-a9b5-428a-b441-b7d763dbd102","Type":"ContainerStarted","Data":"f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.629108 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" event={"ID":"cae67616-1145-4057-b304-08a322e78d9d","Type":"ContainerStarted","Data":"abeb5f460434b370df78fba321d8cfb4051c19265056a363be0c2c42b3403ae4"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.631225 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.635994 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podStartSLOduration=2.987048831 podStartE2EDuration="33.635973458s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.338258953 +0000 UTC m=+1016.011113190" lastFinishedPulling="2026-01-29 15:45:02.98718358 +0000 UTC m=+1046.660037817" observedRunningTime="2026-01-29 15:45:03.63147382 +0000 UTC m=+1047.304328057" watchObservedRunningTime="2026-01-29 15:45:03.635973458 +0000 UTC m=+1047.308827695" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.651365 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" podStartSLOduration=11.424350437 podStartE2EDuration="33.65134446s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.655929824 +0000 UTC m=+1015.328784061" lastFinishedPulling="2026-01-29 15:44:53.882923807 +0000 UTC m=+1037.555778084" observedRunningTime="2026-01-29 15:45:03.649954706 +0000 UTC m=+1047.322808933" watchObservedRunningTime="2026-01-29 15:45:03.65134446 +0000 UTC m=+1047.324198697" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.674693 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" event={"ID":"cb2d6253-7fa7-41a9-9d0b-002ef590c4db","Type":"ContainerStarted","Data":"05265b3e4a002912dbd3447c2ee696494182fb85b244603e887ec83f3ef57e47"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.675894 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.684524 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podStartSLOduration=4.078459916 podStartE2EDuration="33.684506713s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.381504789 +0000 UTC m=+1016.054359026" lastFinishedPulling="2026-01-29 15:45:01.987551586 +0000 UTC m=+1045.660405823" observedRunningTime="2026-01-29 15:45:03.674732306 +0000 UTC m=+1047.347586543" watchObservedRunningTime="2026-01-29 15:45:03.684506713 +0000 UTC 
m=+1047.357360950" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.740001 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podStartSLOduration=4.161982008 podStartE2EDuration="33.739988605s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.392540936 +0000 UTC m=+1016.065395173" lastFinishedPulling="2026-01-29 15:45:01.970547533 +0000 UTC m=+1045.643401770" observedRunningTime="2026-01-29 15:45:03.738364405 +0000 UTC m=+1047.411218642" watchObservedRunningTime="2026-01-29 15:45:03.739988605 +0000 UTC m=+1047.412842842" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.763925 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.764066 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z75gs_openshift-marketplace(014fe771-fe01-4b92-b038-862615b75136): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.766884 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 
15:45:03.805107 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" podStartSLOduration=13.298796169 podStartE2EDuration="33.80509074s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.330372368 +0000 UTC m=+1015.003226605" lastFinishedPulling="2026-01-29 15:44:51.836666939 +0000 UTC m=+1035.509521176" observedRunningTime="2026-01-29 15:45:03.772071661 +0000 UTC m=+1047.444925908" watchObservedRunningTime="2026-01-29 15:45:03.80509074 +0000 UTC m=+1047.477944977" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.808424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podStartSLOduration=4.230673378 podStartE2EDuration="33.80841262s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.391576103 +0000 UTC m=+1016.064430340" lastFinishedPulling="2026-01-29 15:45:01.969315305 +0000 UTC m=+1045.642169582" observedRunningTime="2026-01-29 15:45:03.807092488 +0000 UTC m=+1047.479946725" watchObservedRunningTime="2026-01-29 15:45:03.80841262 +0000 UTC m=+1047.481266857" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.857980 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podStartSLOduration=3.059110983 podStartE2EDuration="33.857967929s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.991679157 +0000 UTC m=+1015.664533394" lastFinishedPulling="2026-01-29 15:45:02.790536093 +0000 UTC m=+1046.463390340" observedRunningTime="2026-01-29 15:45:03.857200541 +0000 UTC m=+1047.530054778" watchObservedRunningTime="2026-01-29 15:45:03.857967929 +0000 UTC m=+1047.530822166" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.895482 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" podStartSLOduration=33.895465596 podStartE2EDuration="33.895465596s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:45:03.893578831 +0000 UTC m=+1047.566433068" watchObservedRunningTime="2026-01-29 15:45:03.895465596 +0000 UTC m=+1047.568319853" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.906986 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" podStartSLOduration=11.65763433 podStartE2EDuration="33.906974464s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.633485001 +0000 UTC m=+1015.306339238" lastFinishedPulling="2026-01-29 15:44:53.882825105 +0000 UTC m=+1037.555679372" observedRunningTime="2026-01-29 15:45:03.90513837 +0000 UTC m=+1047.578005468" watchObservedRunningTime="2026-01-29 15:45:03.906974464 +0000 UTC m=+1047.579828691" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.920820 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" podStartSLOduration=13.907767701000001 podStartE2EDuration="33.920804999s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" 
firstStartedPulling="2026-01-29 15:44:31.823614151 +0000 UTC m=+1015.496468388" lastFinishedPulling="2026-01-29 15:44:51.836651449 +0000 UTC m=+1035.509505686" observedRunningTime="2026-01-29 15:45:03.91831979 +0000 UTC m=+1047.591174027" watchObservedRunningTime="2026-01-29 15:45:03.920804999 +0000 UTC m=+1047.593659236" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.938353 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podStartSLOduration=4.334894101 podStartE2EDuration="33.938338934s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.387614777 +0000 UTC m=+1016.060469014" lastFinishedPulling="2026-01-29 15:45:01.9910596 +0000 UTC m=+1045.663913847" observedRunningTime="2026-01-29 15:45:03.934351167 +0000 UTC m=+1047.607205404" watchObservedRunningTime="2026-01-29 15:45:03.938338934 +0000 UTC m=+1047.611193171" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.961654 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" podStartSLOduration=12.450714328 podStartE2EDuration="33.961637557s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.374253753 +0000 UTC m=+1016.047107990" lastFinishedPulling="2026-01-29 15:44:53.885176982 +0000 UTC m=+1037.558031219" observedRunningTime="2026-01-29 15:45:03.957835196 +0000 UTC m=+1047.630689433" watchObservedRunningTime="2026-01-29 15:45:03.961637557 +0000 UTC m=+1047.634491794" Jan 29 15:45:04 crc kubenswrapper[5008]: E0129 15:45:04.324317 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:04 crc kubenswrapper[5008]: I0129 15:45:04.681974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" event={"ID":"b46e3eea-2330-4b3f-b45d-34ae38a0dde9","Type":"ContainerStarted","Data":"e74665f2cdbce441cbd0fa4745148c26ed1081999840132eae1a9cea0c76feb5"} Jan 29 15:45:04 crc kubenswrapper[5008]: E0129 15:45:04.685441 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:04 crc kubenswrapper[5008]: I0129 15:45:04.719380 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podStartSLOduration=2.65305577 podStartE2EDuration="34.719361011s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.855345589 +0000 UTC m=+1015.528199826" lastFinishedPulling="2026-01-29 15:45:03.92165083 +0000 UTC m=+1047.594505067" observedRunningTime="2026-01-29 15:45:04.702279126 +0000 UTC m=+1048.375133383" watchObservedRunningTime="2026-01-29 15:45:04.719361011 +0000 UTC m=+1048.392215258" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.008110 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114520 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114575 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.115769 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume" (OuterVolumeSpecName: "config-volume") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.121158 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.121102 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl" (OuterVolumeSpecName: "kube-api-access-b5tsl") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "kube-api-access-b5tsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216404 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216446 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216459 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.326805 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692235 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerDied","Data":"4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6"} Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692307 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692317 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.168958 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.715599 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" event={"ID":"d39876a5-4ca3-44e2-a4c5-c6541c2ec812","Type":"ContainerStarted","Data":"3f21253bce924e7eaadfcefeb40aa20f8865fddcdd5547ea99ebd70f67299196"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.715954 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.716879 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" event={"ID":"4ff89cd9-951e-4907-b60c-a1a1c08007a4","Type":"ContainerStarted","Data":"67374a8df764e4400300a81cd50767b0992ed79dcfc8091f6d4bf5484b09fba2"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.717032 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.718286 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" event={"ID":"9f5d1ef8-a9b5-428a-b441-b7d763dbd102","Type":"ContainerStarted","Data":"fbedf6f95722853c707dece2abc3325005e22f2716b0a48ac2ddb7b1c831f4d5"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 
15:45:07.718495 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.730352 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podStartSLOduration=3.608112006 podStartE2EDuration="37.730334559s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.369837547 +0000 UTC m=+1016.042691784" lastFinishedPulling="2026-01-29 15:45:06.4920601 +0000 UTC m=+1050.164914337" observedRunningTime="2026-01-29 15:45:07.728230639 +0000 UTC m=+1051.401084876" watchObservedRunningTime="2026-01-29 15:45:07.730334559 +0000 UTC m=+1051.403188816" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.746395 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" podStartSLOduration=33.727232886 podStartE2EDuration="37.746375228s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:45:02.492940952 +0000 UTC m=+1046.165795189" lastFinishedPulling="2026-01-29 15:45:06.512083274 +0000 UTC m=+1050.184937531" observedRunningTime="2026-01-29 15:45:07.743965129 +0000 UTC m=+1051.416819396" watchObservedRunningTime="2026-01-29 15:45:07.746375228 +0000 UTC m=+1051.419229475" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.785045 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" podStartSLOduration=33.855957071 podStartE2EDuration="37.785027823s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:45:02.560874346 +0000 UTC m=+1046.233728583" lastFinishedPulling="2026-01-29 15:45:06.489945058 +0000 UTC m=+1050.162799335" observedRunningTime="2026-01-29 15:45:07.778276879 +0000 UTC m=+1051.451131126" watchObservedRunningTime="2026-01-29 15:45:07.785027823 +0000 UTC m=+1051.457882080" Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.745148 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" event={"ID":"14020423-5911-4b69-8889-b12267c9bbf9","Type":"ContainerStarted","Data":"6b42c189843f865aeb4d6b78a6d289090887d3a0d8a4d7788b1ee3272759fde4"} Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.745805 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.764121 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podStartSLOduration=4.600557219 podStartE2EDuration="39.764099916s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.865339451 +0000 UTC m=+1015.538193688" lastFinishedPulling="2026-01-29 15:45:07.028882108 +0000 UTC m=+1050.701736385" observedRunningTime="2026-01-29 15:45:09.760237712 +0000 UTC m=+1053.433091939" watchObservedRunningTime="2026-01-29 15:45:09.764099916 +0000 UTC m=+1053.436954213" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.561652 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.607223 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.611007 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.634899 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.691115 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.728082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.731486 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.751893 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" event={"ID":"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c","Type":"ContainerStarted","Data":"66fad5914636c645b6adbeb3857b2a7a42c761abe9ed7666b21b08ae58584f10"} Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.752082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.753065 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" event={"ID":"e76346a9-7ba5-4178-82b7-da9f0c337c08","Type":"ContainerStarted","Data":"965fa0b13fd2d11bb62fe5f807f525ab01ed8547b3557573aefd8a284466f1c1"} Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.753520 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.758064 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.769861 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podStartSLOduration=2.911689296 podStartE2EDuration="40.769845799s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.378076206 +0000 UTC m=+1016.050930443" lastFinishedPulling="2026-01-29 15:45:10.236232679 +0000 UTC m=+1053.909086946" observedRunningTime="2026-01-29 15:45:10.768092197 +0000 UTC m=+1054.440946434" watchObservedRunningTime="2026-01-29 15:45:10.769845799 +0000 UTC m=+1054.442700036" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.794679 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podStartSLOduration=2.26327123 podStartE2EDuration="40.79466348s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.872027763 +0000 UTC m=+1015.544882000" lastFinishedPulling="2026-01-29 15:45:10.403420013 +0000 UTC m=+1054.076274250" observedRunningTime="2026-01-29 15:45:10.789155527 +0000 UTC m=+1054.462009764" watchObservedRunningTime="2026-01-29 15:45:10.79466348 +0000 UTC m=+1054.467517717" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.873714 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.031062 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.050018 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.103518 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.159140 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.202304 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.227921 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.230900 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.506696 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990697 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990850 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.992099 5008 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.992586 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" gracePeriod=600 Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.327850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.327852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.328076 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:16 crc kubenswrapper[5008]: I0129 15:45:16.415065 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:45:16 crc kubenswrapper[5008]: I0129 15:45:16.737877 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.449209 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.449380 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z75gs_openshift-marketplace(014fe771-fe01-4b92-b038-862615b75136): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.451333 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:20 crc kubenswrapper[5008]: I0129 15:45:20.978146 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:45:21 crc kubenswrapper[5008]: I0129 15:45:21.009122 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:45:21 crc kubenswrapper[5008]: I0129 15:45:21.465228 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.101175 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-gk9q8_ca0fcb2d-733d-4bde-9bbf-3f7082d0e244/machine-config-daemon/4.log" Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102713 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" exitCode=-1 Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102759 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" 
event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102840 5008 scope.go:117] "RemoveContainer" containerID="d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" Jan 29 15:45:27 crc kubenswrapper[5008]: E0129 15:45:27.360199 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:28 crc kubenswrapper[5008]: E0129 15:45:28.324989 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:31 crc kubenswrapper[5008]: E0129 15:45:31.326851 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.169582 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.171370 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" event={"ID":"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1","Type":"ContainerStarted","Data":"dd151ea38c4064e07bdf2b218590a45c407525f6fb598dffc985f5c79d6326a7"} Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.205731 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podStartSLOduration=2.110222511 podStartE2EDuration="1m4.205706795s" podCreationTimestamp="2026-01-29 15:44:31 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.384524023 +0000 UTC m=+1016.057378260" lastFinishedPulling="2026-01-29 15:45:34.480008307 +0000 UTC m=+1078.152862544" observedRunningTime="2026-01-29 15:45:35.199970856 +0000 UTC m=+1078.872825103" watchObservedRunningTime="2026-01-29 15:45:35.205706795 +0000 UTC m=+1078.878561042" Jan 29 15:45:39 crc kubenswrapper[5008]: E0129 15:45:39.326203 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:44 crc kubenswrapper[5008]: I0129 15:45:44.241344 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31" exitCode=0 Jan 29 15:45:44 crc kubenswrapper[5008]: I0129 
15:45:44.241478 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31"} Jan 29 15:45:45 crc kubenswrapper[5008]: I0129 15:45:45.250912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerStarted","Data":"aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6"} Jan 29 15:45:45 crc kubenswrapper[5008]: I0129 15:45:45.293724 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6kzcj" podStartSLOduration=2.434106012 podStartE2EDuration="1m39.29370484s" podCreationTimestamp="2026-01-29 15:44:06 +0000 UTC" firstStartedPulling="2026-01-29 15:44:07.877247959 +0000 UTC m=+991.550102196" lastFinishedPulling="2026-01-29 15:45:44.736846787 +0000 UTC m=+1088.409701024" observedRunningTime="2026-01-29 15:45:45.275353776 +0000 UTC m=+1088.948208043" watchObservedRunningTime="2026-01-29 15:45:45.29370484 +0000 UTC m=+1088.966559077" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.802825 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.803201 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.876980 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:48 crc kubenswrapper[5008]: I0129 15:45:48.274043 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432" exitCode=0 Jan 29 15:45:48 crc kubenswrapper[5008]: I0129 15:45:48.274134 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432"} Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.283049 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerStarted","Data":"ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16"} Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.302465 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z75gs" podStartSLOduration=31.160997991 podStartE2EDuration="1m16.30245s" podCreationTimestamp="2026-01-29 15:44:33 +0000 UTC" firstStartedPulling="2026-01-29 15:45:03.602516299 +0000 UTC m=+1047.275370536" lastFinishedPulling="2026-01-29 15:45:48.743968278 +0000 UTC m=+1092.416822545" observedRunningTime="2026-01-29 15:45:49.30163862 +0000 UTC m=+1092.974492867" watchObservedRunningTime="2026-01-29 15:45:49.30245 +0000 UTC m=+1092.975304237" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497136 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:49 crc kubenswrapper[5008]: E0129 15:45:49.497412 
5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497423 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497550 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.501243 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.503022 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.504442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-96kv2" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.505533 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.505652 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.514936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.585017 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.586084 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.588187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.599941 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.616005 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.616144 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717089 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717347 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717507 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717543 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717567 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 
15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.737552 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.818919 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819002 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819029 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819697 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819741 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.821698 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.849280 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.906021 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.262880 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.289100 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" event={"ID":"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078","Type":"ContainerStarted","Data":"7dcfe1c84af859609b7cd8621d352272c552ebce1b442395a6dd0d1578eb8603"} Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.410759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:50 crc kubenswrapper[5008]: W0129 15:45:50.421435 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db905f0_53de_4983_b70f_c883bfe123ba.slice/crio-30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348 WatchSource:0}: Error finding container 30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348: Status 404 returned error can't find the container with id 30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348 Jan 29 15:45:51 crc kubenswrapper[5008]: I0129 15:45:51.303563 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" event={"ID":"3db905f0-53de-4983-b70f-c883bfe123ba","Type":"ContainerStarted","Data":"30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348"} Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.354995 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.384314 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.385339 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.411639 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558507 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558582 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558610 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.588990 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.614883 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.622039 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.649244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.660932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.660995 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.661013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.661925 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.664640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.691710 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.724166 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768301 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768432 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768527 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870543 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870574 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.872805 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.872892 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.911102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.971855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.215119 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.318503 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerStarted","Data":"309fd497280f26c9fefa297dd5016a654c256866d10ab9c20a829153df0b8be3"} Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.398396 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:53 crc kubenswrapper[5008]: W0129 15:45:53.403436 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd528ee94_b499_4f20_8603_6dcc9e8b0361.slice/crio-7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465 WatchSource:0}: Error finding container 7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465: Status 404 returned error can't find the container with id 7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465 Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.480244 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.481563 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.484442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.488771 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490422 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490429 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490603 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7kjkn" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.492650 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.493088 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.493611 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579401 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579450 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579492 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579520 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579536 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579562 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579578 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579595 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579618 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579639 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579657 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680773 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680866 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680885 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680907 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680935 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680957 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680976 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680998 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc 
kubenswrapper[5008]: I0129 15:45:53.681016 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.681052 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.681390 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.682691 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.683311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.683635 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.684012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.685812 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.686943 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687363 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687475 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687678 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.696577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.739289 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.745342 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.746948 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749471 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749665 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749975 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tfhm4" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.750148 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.750365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.753196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.756137 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.811173 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.835736 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.835805 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.884045 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886196 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886271 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886893 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886968 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886991 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887021 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887065 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887106 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887147 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988702 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988797 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988842 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988902 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988922 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988961 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988981 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989035 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989374 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989423 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989928 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990493 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990529 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.995030 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.997640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.003923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.007695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.012019 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.024460 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.083085 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.304458 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.328558 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerStarted","Data":"7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465"} Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.386498 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.429677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.517161 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.042042 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.043624 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.089459 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.089974 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.090339 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gx87v" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.090645 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.092122 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.104376 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114952 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114983 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " 
pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.115060 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.115120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122298 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122458 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122553 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223875 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223970 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224036 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224064 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224093 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224125 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224160 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224201 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224664 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225094 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225681 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.233267 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.233300 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.241084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.247296 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.429610 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.348765 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" containerID="cri-o://ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" gracePeriod=2 Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.420921 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.422400 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.424715 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.426221 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zdf89" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.427156 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.427561 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.432971 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445956 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446004 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mkv\" (UniqueName: \"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446082 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548730 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548855 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549004 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549032 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549057 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mkv\" (UniqueName: 
\"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549365 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550411 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550476 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.551350 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.553624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.564636 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.574677 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mkv\" (UniqueName: \"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.601135 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.747612 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.769067 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.770259 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.772828 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2pxmp" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.773721 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.774982 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.784082 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861575 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861628 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861681 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861736 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861756 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.866836 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.911188 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965601 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965650 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965671 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965700 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.966806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.966905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.970428 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.970713 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.985527 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.091459 5008 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.363700 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" exitCode=0 Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.363859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16"} Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.364275 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" containerID="cri-o://aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" gracePeriod=2 Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.371592 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" exitCode=0 Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.371636 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6"} Jan 29 15:45:58 crc kubenswrapper[5008]: W0129 15:45:58.579166 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dcd0990_beb1_445a_b387_b2b78c1a39d2.slice/crio-d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f WatchSource:0}: Error finding container d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f: Status 404 returned error can't find the container with id d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.644161 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.645303 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.654791 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4fqm2" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.658091 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.711544 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.813120 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.832203 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.996094 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:45:59 crc kubenswrapper[5008]: I0129 15:45:59.379312 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f"} Jan 29 15:45:59 crc kubenswrapper[5008]: W0129 15:45:59.394074 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c8683a3_18f6_4242_9991_b542aed9143b.slice/crio-2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2 WatchSource:0}: Error finding container 2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2: Status 404 returned error can't find the container with id 2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2 Jan 29 15:46:00 crc kubenswrapper[5008]: I0129 15:46:00.386479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2"} Jan 29 15:46:02 crc kubenswrapper[5008]: I0129 15:46:02.992591 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:02 crc kubenswrapper[5008]: I0129 15:46:02.994216 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000576 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000885 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kjfp" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000974 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.019774 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087075 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087407 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087443 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087506 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087525 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087580 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.098700 5008 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.100249 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.121441 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188727 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189767 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189805 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189831 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189957 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189978 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190043 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190069 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190228 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.196276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.199558 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod 
\"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.211463 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.217859 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292592 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292631 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292699 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293184 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293241 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.294519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.311763 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.319494 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.417173 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.799335 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.807117 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.811288 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812144 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812363 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812591 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-52kcd" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.816995 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.836916 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.838281 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.839080 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.839177 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.839241 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.904458 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905354 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905417 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905576 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905671 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905724 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006709 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" 
Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006861 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006926 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.007150 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.007433 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.008012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.008862 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.012380 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.013838 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.017336 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.031803 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.036944 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.133250 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.648904 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719199 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719312 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.720968 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities" (OuterVolumeSpecName: "utilities") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.753291 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272" (OuterVolumeSpecName: "kube-api-access-tg272") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "kube-api-access-tg272". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.766295 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822101 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822137 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822148 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"3d4dceb557efb379fc43836d7c0b6854e7a45385d099f1155ac83813cd0b127b"} Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433105 5008 scope.go:117] "RemoveContainer" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433122 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.453900 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.462169 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.469543 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.471612 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-utilities" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.471744 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-utilities" Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.471891 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.471974 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.472062 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-content" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.472144 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-content" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.472417 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.473363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478112 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478187 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478617 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d65gd" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.493615 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540706 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540746 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540769 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540845 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540891 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540943 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540960 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642468 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642582 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642637 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642742 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642834 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642872 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642955 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.644420 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.645034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.646324 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.649684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.657013 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.662532 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.667407 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.682990 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.794188 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.803385 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.803762 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.804141 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.804172 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:07 crc kubenswrapper[5008]: I0129 15:46:07.339869 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014fe771-fe01-4b92-b038-862615b75136" path="/var/lib/kubelet/pods/014fe771-fe01-4b92-b038-862615b75136/volumes" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.125849 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219161 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219240 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.223594 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities" (OuterVolumeSpecName: "utilities") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.228342 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h" (OuterVolumeSpecName: "kube-api-access-m6t5h") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "kube-api-access-m6t5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.276848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320758 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320804 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320815 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.468378 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"debd562bbbd639021d945b4eafb3e69ca2ec6a19be12a7aeaf5f75ffdbc60792"} Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.468458 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.501018 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.507867 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:10.998993 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:10.999179 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhgbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-s5tkh_openstack(3db905f0-53de-4983-b70f-c883bfe123ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:11.000492 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" podUID="3db905f0-53de-4983-b70f-c883bfe123ba" Jan 29 15:46:11 crc kubenswrapper[5008]: I0129 15:46:11.335139 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82fc869-759d-4902-9aef-fdd69452b420" path="/var/lib/kubelet/pods/c82fc869-759d-4902-9aef-fdd69452b420/volumes" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.139147 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.139687 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqfht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-d4fhx_openstack(b128f8df-0b1b-4062-9c3d-fd0f1d2e8078): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.140919 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" podUID="b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.184882 5008 scope.go:117] "RemoveContainer" containerID="b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.227185 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.334411 5008 scope.go:117] "RemoveContainer" containerID="6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351512 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351561 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351640 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.353068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config" (OuterVolumeSpecName: "config") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.353534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.377885 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd" (OuterVolumeSpecName: "kube-api-access-hhgbd") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "kube-api-access-hhgbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.378731 5008 scope.go:117] "RemoveContainer" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454690 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454722 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454732 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.470794 5008 scope.go:117] "RemoveContainer" containerID="252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.489688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" event={"ID":"3db905f0-53de-4983-b70f-c883bfe123ba","Type":"ContainerDied","Data":"30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348"} Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.489709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.537672 5008 scope.go:117] "RemoveContainer" containerID="5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d" Jan 29 15:46:12 crc kubenswrapper[5008]: W0129 15:46:12.547489 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37ef43d_23ae_4a9c_af60_e616882400c3.slice/crio-ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1 WatchSource:0}: Error finding container ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1: Status 404 returned error can't find the container with id ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1 Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.557086 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.593305 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.609584 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.693504 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.794941 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.800106 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.977837 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:46:13 crc 
kubenswrapper[5008]: I0129 15:46:13.093164 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:13 crc kubenswrapper[5008]: W0129 15:46:13.139064 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8d28cd_76d6_4a6e_b6bd_a0e5f0fc2106.slice/crio-c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e WatchSource:0}: Error finding container c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e: Status 404 returned error can't find the container with id c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.194079 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.235187 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.339013 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db905f0-53de-4983-b70f-c883bfe123ba" path="/var/lib/kubelet/pods/3db905f0-53de-4983-b70f-c883bfe123ba/volumes" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.374270 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.374412 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.375028 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config" (OuterVolumeSpecName: "config") pod "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" (UID: "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.432980 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht" (OuterVolumeSpecName: "kube-api-access-sqfht") pod "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" (UID: "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078"). InnerVolumeSpecName "kube-api-access-sqfht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.476842 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.476928 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.505820 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"de23257238bb8ce8aeab1bd141180cc6d2ae7c211dfd51f10facebf0c4eb8ac7"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.509821 5008 generic.go:334] "Generic (PLEG): container finished" podID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerID="ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.509947 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.511695 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerStarted","Data":"7986044eeb1cbc11c730082d941ee043dc7374de8a33bf15addb097a4c50eaac"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.513277 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.518084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b37ef43d-23ae-4a9c-af60-e616882400c3","Type":"ContainerStarted","Data":"ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.520737 5008 generic.go:334] "Generic (PLEG): container finished" podID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerID="074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.520857 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.523836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr" event={"ID":"0dd702c8-269b-4fb6-a3a7-03adf93d916a","Type":"ContainerStarted","Data":"3cb944a8731235c1cca254dffa2e6f80c60f7af805b3b716839e7a4b6a0131d1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.525663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" 
event={"ID":"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078","Type":"ContainerDied","Data":"7dcfe1c84af859609b7cd8621d352272c552ebce1b442395a6dd0d1578eb8603"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.525845 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.540856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"d7c2a679600cd5acbad60649171c7cd134a1e58fbdc25f09279c839e2d796043"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.566963 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.567026 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.575102 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"79074bae8ec62d3b676b76d5840f804063a918226c2f886466e6cceb9fb6bd34"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.650866 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.664042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.898254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.585967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerStarted","Data":"fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.586338 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.591550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.593488 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.596791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerStarted","Data":"41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.597170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.606772 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" podStartSLOduration=3.496525097 podStartE2EDuration="22.606755736s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:53.229660107 +0000 UTC m=+1096.902514344" lastFinishedPulling="2026-01-29 15:46:12.339890746 +0000 UTC m=+1116.012744983" observedRunningTime="2026-01-29 15:46:14.602983715 +0000 UTC m=+1118.275837952" watchObservedRunningTime="2026-01-29 15:46:14.606755736 +0000 UTC m=+1118.279609973" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.681972 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" podStartSLOduration=3.747683529 podStartE2EDuration="22.68195117s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:53.405727157 +0000 UTC m=+1097.078581404" lastFinishedPulling="2026-01-29 15:46:12.339994808 +0000 UTC m=+1116.012849045" observedRunningTime="2026-01-29 15:46:14.672185864 +0000 UTC m=+1118.345040121" watchObservedRunningTime="2026-01-29 15:46:14.68195117 +0000 UTC m=+1118.354805407" Jan 29 15:46:15 crc kubenswrapper[5008]: I0129 15:46:15.331832 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" path="/var/lib/kubelet/pods/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078/volumes" Jan 29 15:46:15 crc kubenswrapper[5008]: I0129 15:46:15.607197 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"055c119d6c3b38d87cb3eb25681a7b20c4fff4007d8b206b5a33e53857505a8d"} Jan 29 15:46:21 crc kubenswrapper[5008]: I0129 15:46:21.655658 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerStarted","Data":"fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb"} Jan 29 15:46:21 crc kubenswrapper[5008]: I0129 15:46:21.686514 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9l2c6" podStartSLOduration=5.059012921 podStartE2EDuration="1m59.686493369s" podCreationTimestamp="2026-01-29 15:44:22 +0000 UTC" firstStartedPulling="2026-01-29 15:44:24.134923367 +0000 UTC m=+1007.807777614" lastFinishedPulling="2026-01-29 15:46:18.762403785 +0000 UTC m=+1122.435258062" observedRunningTime="2026-01-29 15:46:21.686055998 +0000 UTC m=+1125.358910245" watchObservedRunningTime="2026-01-29 15:46:21.686493369 +0000 UTC m=+1125.359347616" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.684812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.686322 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.727021 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.978317 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.022927 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.671923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b37ef43d-23ae-4a9c-af60-e616882400c3","Type":"ContainerStarted","Data":"96461c78d0f5c7bcb23c8e1e5a587ad4eddb2cb92af7ccc28efdea87998e8286"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.672352 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.673246 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.674429 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr" event={"ID":"0dd702c8-269b-4fb6-a3a7-03adf93d916a","Type":"ContainerStarted","Data":"4c011b9053a81c0b12cffe67218f924b7e8abcd04528d119ed8892ef660b7e19"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.674818 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.676204 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"7ef0619f99a70223bdd92df7c6c63223e72c780800e32fc056d88db81748337a"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.677208 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"6c80d9650f93ccf1ee0a61414781394325e3257f3cfa6be947295bed5e1e4e97"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.678381 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerStarted","Data":"9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.678504 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.679889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681091 5008 generic.go:334] "Generic (PLEG): container finished" podID="fb07a603-1696-4378-8d99-382d5bc152da" containerID="a9c3df6ce45ce01e23a674a545d9a91984df9bf8df7e1312c315e20a4b729728" exitCode=0 Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681131 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerDied","Data":"a9c3df6ce45ce01e23a674a545d9a91984df9bf8df7e1312c315e20a4b729728"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681323 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" containerID="cri-o://fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" gracePeriod=10 Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.690925 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.210634252 podStartE2EDuration="27.690907543s" podCreationTimestamp="2026-01-29 15:45:56 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.561232025 +0000 UTC m=+1116.234086262" lastFinishedPulling="2026-01-29 15:46:19.041505316 +0000 UTC m=+1122.714359553" observedRunningTime="2026-01-29 15:46:23.688484494 +0000 UTC m=+1127.361338731" watchObservedRunningTime="2026-01-29 15:46:23.690907543 +0000 UTC m=+1127.363761780" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.742523 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bw9wr" podStartSLOduration=13.419706567 podStartE2EDuration="21.742505084s" podCreationTimestamp="2026-01-29 15:46:02 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.881397432 +0000 UTC m=+1116.554251669" lastFinishedPulling="2026-01-29 15:46:21.204195949 +0000 UTC m=+1124.877050186" observedRunningTime="2026-01-29 15:46:23.73653448 +0000 UTC m=+1127.409388707" watchObservedRunningTime="2026-01-29 15:46:23.742505084 +0000 UTC m=+1127.415359341" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.821146 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" probeResult="failure" output=< Jan 29 15:46:23 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:46:23 crc kubenswrapper[5008]: > Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.825711 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.410767842 podStartE2EDuration="25.825697792s" podCreationTimestamp="2026-01-29 15:45:58 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.99755086 +0000 UTC m=+1116.670405097" lastFinishedPulling="2026-01-29 15:46:22.4124808 +0000 UTC m=+1126.085335047" observedRunningTime="2026-01-29 15:46:23.822997898 +0000 UTC m=+1127.495852135" watchObservedRunningTime="2026-01-29 15:46:23.825697792 +0000 UTC m=+1127.498552029" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.690356 5008 generic.go:334] "Generic (PLEG): container finished" podID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerID="fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" exitCode=0 Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.690442 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36"} Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.693597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"ad93a2696fbfbb039cb97f5b3d24bc4c2c2b3502def665e7cb6e28ff3061ad4a"} Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.785429 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862349 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862571 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.877171 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc" (OuterVolumeSpecName: "kube-api-access-gt2jc") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "kube-api-access-gt2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.897359 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.905495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config" (OuterVolumeSpecName: "config") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964510 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964545 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964554 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.262666 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263395 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-utilities" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263415 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-utilities" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263433 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263442 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263468 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263476 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263494 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="init" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263501 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="init" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263512 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-content" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263521 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-content" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263692 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263708 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.264354 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.266731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.288226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.372265 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373502 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.406684 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.411191 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.412852 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.419490 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475572 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475628 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475749 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " 
pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475826 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476638 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476908 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.480577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.480587 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.510502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577439 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 
15:46:25.577521 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577582 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.578388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.578562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.579841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.597634 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.602053 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707897 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"7e0ade73e0587d08ed6b9c03fa1da934522b72d8922cde7b3b3e88a4f6b44af7"} Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707957 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707977 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"309fd497280f26c9fefa297dd5016a654c256866d10ab9c20a829153df0b8be3"} Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714656 5008 scope.go:117] "RemoveContainer" containerID="fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714836 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.736399 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.737037 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k5zwb" podStartSLOduration=14.739579183 podStartE2EDuration="22.737019498s" podCreationTimestamp="2026-01-29 15:46:03 +0000 UTC" firstStartedPulling="2026-01-29 15:46:13.206819736 +0000 UTC m=+1116.879673973" lastFinishedPulling="2026-01-29 15:46:21.204260041 +0000 UTC m=+1124.877114288" observedRunningTime="2026-01-29 15:46:25.735937602 +0000 UTC m=+1129.408791839" watchObservedRunningTime="2026-01-29 15:46:25.737019498 +0000 UTC m=+1129.409873735" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.751864 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.758024 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.765072 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.782400 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.786569 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.790077 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.793189 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881552 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881778 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881924 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.900317 5008 scope.go:117] "RemoveContainer" containerID="ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.983701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984088 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984125 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984175 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984213 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985207 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985292 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985961 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.003557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.102543 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.449030 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.457898 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd1d492_c335_4318_8eb9_bf8140f43b70.slice/crio-bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9 WatchSource:0}: Error finding container bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9: Status 404 returned error can't find the container with id bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.526236 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.531084 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90c13843_e314_4465_af68_367fc8d59731.slice/crio-d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805 WatchSource:0}: Error finding container d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805: Status 404 returned error can't find the container with id d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.615202 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.644210 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551951b1_6601_4b58_ab3c_aa03c962e65d.slice/crio-6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940 WatchSource:0}: Error finding container 6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940: Status 404 returned error can't find the container with id 6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.724106 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"5431f0774042099b394cbe05efcc819174663c681943e8592d97aeb438d5eae6"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.726897 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"af60a5c9f2d5dee8ca8e1563f1892d8501f2e081ae5f8239ebe76fdf7298ba51"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.728814 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerStarted","Data":"6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.731161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerStarted","Data":"bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.736160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-qkf4v" event={"ID":"90c13843-e314-4465-af68-367fc8d59731","Type":"ContainerStarted","Data":"d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.745677 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.774593186 podStartE2EDuration="24.745662336s" podCreationTimestamp="2026-01-29 15:46:02 +0000 UTC" firstStartedPulling="2026-01-29 15:46:15.021327463 +0000 UTC m=+1118.694181710" lastFinishedPulling="2026-01-29 15:46:25.992396623 +0000 UTC m=+1129.665250860" observedRunningTime="2026-01-29 15:46:26.743168885 +0000 UTC m=+1130.416023122" watchObservedRunningTime="2026-01-29 15:46:26.745662336 +0000 UTC m=+1130.418516573" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.771863 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.920807456 podStartE2EDuration="22.771841611s" podCreationTimestamp="2026-01-29 15:46:04 +0000 UTC" firstStartedPulling="2026-01-29 15:46:13.141571003 +0000 UTC m=+1116.814425240" lastFinishedPulling="2026-01-29 15:46:25.992605158 +0000 UTC m=+1129.665459395" observedRunningTime="2026-01-29 15:46:26.765136589 +0000 UTC m=+1130.437990846" watchObservedRunningTime="2026-01-29 15:46:26.771841611 +0000 UTC m=+1130.444695868" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.798113 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.849817 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.095603 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.340339 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" path="/var/lib/kubelet/pods/eaa396b6-206d-4e0f-8983-ee9ac16c910a/volumes" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.746758 5008 generic.go:334] "Generic (PLEG): container finished" podID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerID="2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092" exitCode=0 Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.746859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.748736 5008 generic.go:334] "Generic (PLEG): container finished" podID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerID="4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3" exitCode=0 Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.748789 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerDied","Data":"4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.750917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qkf4v" 
event={"ID":"90c13843-e314-4465-af68-367fc8d59731","Type":"ContainerStarted","Data":"f7a25b072fa4182b25996d1c152c76441aa99f4d320197ae565130accb56e11d"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.751827 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.837809 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.859690 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qkf4v" podStartSLOduration=2.85967024 podStartE2EDuration="2.85967024s" podCreationTimestamp="2026-01-29 15:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:27.815311214 +0000 UTC m=+1131.488165481" watchObservedRunningTime="2026-01-29 15:46:27.85967024 +0000 UTC m=+1131.532524477" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.134007 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.179318 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.231341 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371163 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371302 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371409 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.379193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s" (OuterVolumeSpecName: "kube-api-access-r8r8s") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "kube-api-access-r8r8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.392169 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.397403 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.411495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config" (OuterVolumeSpecName: "config") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473527 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473568 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473580 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473588 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.759078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerStarted","Data":"38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.760058 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.761561 5008 generic.go:334] "Generic (PLEG): container finished" podID="2c8d6871-1129-4597-8a1e-94006a17448a" containerID="5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2" exitCode=0 Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.761616 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerDied","Data":"5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764286 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerDied","Data":"bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764326 5008 scope.go:117] "RemoveContainer" containerID="4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764426 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.769461 5008 generic.go:334] "Generic (PLEG): container finished" podID="a2958b99-a5fe-447a-93cc-64bade998854" containerID="4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609" exitCode=0 Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.770055 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerDied","Data":"4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.771302 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.803881 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-znv2j" podStartSLOduration=3.803865975 podStartE2EDuration="3.803865975s" podCreationTimestamp="2026-01-29 15:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:28.802373439 +0000 UTC m=+1132.475227686" watchObservedRunningTime="2026-01-29 15:46:28.803865975 +0000 UTC m=+1132.476720212" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.923282 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.949215 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.990468 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:28 crc kubenswrapper[5008]: E0129 15:46:28.997106 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.997163 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.997516 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.013172 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.064927 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.065889 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.069725 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.080228 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083725 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083776 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083829 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188062 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188145 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188194 
5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188286 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.189182 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.189853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.190885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.191252 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.222551 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.231037 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.232963 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240383 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240552 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240701 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zn5dg" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240837 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.268089 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.336925 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" path="/var/lib/kubelet/pods/6fd1d492-c335-4318-8eb9-bf8140f43b70/volumes" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.368765 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390750 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390817 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390896 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391062 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492833 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492880 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493072 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493096 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493558 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.494344 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: 
\"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.494381 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.498259 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.515236 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.517466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.523525 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.550913 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.675340 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.783700 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"00ad3225217a2d81792204c71772618c8cad067cc008f067de2957088e135a12"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.791569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerStarted","Data":"e0537e06f45058060e30f1ea912f4b791f0f50a83a241274268db34f9a3ef7fc"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.804050 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.316264828 podStartE2EDuration="34.804034587s" podCreationTimestamp="2026-01-29 15:45:55 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.716556593 +0000 UTC m=+1116.389410830" lastFinishedPulling="2026-01-29 15:46:21.204326352 +0000 UTC m=+1124.877180589" observedRunningTime="2026-01-29 15:46:29.803483014 +0000 UTC m=+1133.476337261" watchObservedRunningTime="2026-01-29 15:46:29.804034587 +0000 UTC m=+1133.476888824" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.815544 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"dda80631261104253e7f9951ab5c6feb34248b19f89fcd3f70d7ff4a902f88e3"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.857621 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.449704975 podStartE2EDuration="35.857602697s" podCreationTimestamp="2026-01-29 15:45:54 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.946273535 +0000 UTC m=+1116.619127772" lastFinishedPulling="2026-01-29 15:46:21.354171237 +0000 UTC m=+1125.027025494" observedRunningTime="2026-01-29 15:46:29.851486558 +0000 UTC m=+1133.524340805" watchObservedRunningTime="2026-01-29 15:46:29.857602697 +0000 UTC m=+1133.530456934" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.080514 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.116376 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.119318 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120511 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dmwfl" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120791 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120928 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.130372 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.140442 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213684 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213770 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213868 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213909 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.315937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: 
\"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316040 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316152 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316195 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316215 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.316707 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.317115 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.317254 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:30.817235377 +0000 UTC m=+1134.490089614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317090 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317317 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317777 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.322818 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.334607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.338951 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.614672 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-phmts"]
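swift-storage-0's etc-swift projected volume cannot be set up because the swift-ring-files ConfigMap does not exist yet; the swift-ring-rebalance job admitted below is what eventually publishes it. Meanwhile nestedpendingoperations doubles the retry delay on each failure: durationBeforeRetry goes 500ms, 1s, 2s, 4s across the surrounding entries. A minimal sketch of that policy; the base and factor match the log, while the cap is an assumption these entries never get far enough to show:

```go
package main

import (
	"fmt"
	"time"
)

// Doubling retry backoff as observed above: 500ms -> 1s -> 2s -> 4s.
type backoff struct {
	duration time.Duration // next durationBeforeRetry
	factor   float64       // growth per failed attempt
	max      time.Duration // assumed cap, not shown in these entries
}

func (b *backoff) next() time.Duration {
	d := b.duration
	b.duration = time.Duration(float64(b.duration) * b.factor)
	if b.duration > b.max {
		b.duration = b.max
	}
	return d
}

func main() {
	b := &backoff{duration: 500 * time.Millisecond, factor: 2, max: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		fmt.Println("durationBeforeRetry", b.next())
	}
}
```

Once the ConfigMap appears, the next retry succeeds and the pod proceeds; the backoff state is per volume/pod operation, so other mounts for the same pod (cache, lock, and the rest above) are unaffected.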
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.618146 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.621137 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.627599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-phmts"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.628273 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723382 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723478 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723534 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723565 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723637 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723662 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 
15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826762 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826847 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826875 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826986 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.827019 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.827050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.828729 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.829043 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.829874 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.830302 5008 generic.go:334] "Generic (PLEG): container finished" podID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" exitCode=0 Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.830388 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94"} Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830507 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830522 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830563 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:31.830548158 +0000 UTC m=+1135.503402485 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.840406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.845251 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.853943 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"83fbf9241c85af5076899607dbf81b72b96fef0c2ab74ad22bb0bb59dd9ae067"} Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.854252 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-znv2j" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" containerID="cri-o://38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316" gracePeriod=10 Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.855479 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.866012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.979605 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.426370 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-phmts"] Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.843732 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.843977 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.844006 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.844074 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:33.844052354 +0000 UTC m=+1137.516906591 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.863489 5008 generic.go:334] "Generic (PLEG): container finished" podID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerID="38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316" exitCode=0 Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.863565 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316"} Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.864422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerStarted","Data":"a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20"} Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.736770 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.786109 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.973843 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:46:33 crc kubenswrapper[5008]: I0129 15:46:33.875529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.875840 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not 
found
Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.876079 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.876161 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:37.87613559 +0000 UTC m=+1141.548989867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found
Jan 29 15:46:33 crc kubenswrapper[5008]: I0129 15:46:33.879346 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" containerID="cri-o://fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb" gracePeriod=2
Jan 29 15:46:35 crc kubenswrapper[5008]: I0129 15:46:35.430350 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 29 15:46:35 crc kubenswrapper[5008]: I0129 15:46:35.430737 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.110463 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-znv2j" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.748578 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.748639 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.908392 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb" exitCode=0
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.908500 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb"}
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.910814 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerStarted","Data":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"}
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.911041 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x"
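The prober.go failure above is a TCP readiness probe against the old dnsmasq pod, whose container was killed a few seconds earlier; "connection refused" is the expected result until the pod object is removed. Conceptually the check is just a dial with a timeout, roughly like this sketch (address and timeout are illustrative, taken from the log output rather than from any pod spec):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Rough equivalent of a TCP readiness probe: the target counts as ready if
// the port accepts a connection before the timeout. Kubelet derives the
// address and timeout from the probe spec; these values are illustrative.
func tcpProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "connect: connection refused" once the container is gone
	}
	return conn.Close()
}

func main() {
	if err := tcpProbe("10.217.0.111:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```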
Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.950384 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" podStartSLOduration=8.950361325 podStartE2EDuration="8.950361325s" podCreationTimestamp="2026-01-29 15:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:36.939692047 +0000 UTC m=+1140.612546314" watchObservedRunningTime="2026-01-29 15:46:36.950361325 +0000 UTC m=+1140.623215572"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.066743 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138339 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138399 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138574 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.185557 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr" (OuterVolumeSpecName: "kube-api-access-qhjbr") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "kube-api-access-qhjbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.199661 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.204809 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.206039 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.221094 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config" (OuterVolumeSpecName: "config") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240076 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240112 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240123 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240131 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240140 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.255153 5008 util.go:48] "No ready sandbox for pod can be found. 
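Each volume of the deleted dnsmasq pod walks the same three steps above: "UnmountVolume started", "UnmountVolume.TearDown succeeded", then "Volume detached". That is the volume manager reconciling actual state against desired state after the pod is removed. A sketch of that reconcile pattern under assumed types (names and state shape are hypothetical, not kubelet's):

```go
package main

import "fmt"

func main() {
	desired := map[string]bool{} // pod deleted: no volumes should stay mounted
	actual := map[string]bool{"config": true, "dns-svc": true, "kube-api-access-qhjbr": true}

	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			// plugin-specific TearDown (remove files, revoke token, etc.) runs here
			fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
			delete(actual, vol)
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}
```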
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.255153 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343307 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343389 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") "
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.347510 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities" (OuterVolumeSpecName: "utilities") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.349719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn" (OuterVolumeSpecName: "kube-api-access-gkwsn") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "kube-api-access-gkwsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.410877 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.445949 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.445989 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.446001 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea"}
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923550 5008 scope.go:117] "RemoveContainer" containerID="fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923647 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.927173 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940"}
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.927201 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j"
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.954078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956494 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956515 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
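The etc-swift mount keeps failing for one reason only: the swift-ring-files ConfigMap does not exist yet. It is produced by the swift-ring-rebalance job that runs a few seconds later in this log, so the failure is transient by design; the kubelet simply retries until the ConfigMap appears. A hedged client-go check for the same condition (kubeconfig path is an assumed placeholder; requires the k8s.io/client-go module):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same lookup projected.go performs when building the etc-swift volume.
	_, err = cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	fmt.Println("swift-ring-files present:", err == nil)
}
```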
Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956561 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:45.956543364 +0000 UTC m=+1149.629397601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.970994 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"]
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.982517 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"]
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.991909 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"]
Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.995382 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"]
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.144346 5008 scope.go:117] "RemoveContainer" containerID="e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268"
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.203717 5008 scope.go:117] "RemoveContainer" containerID="11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2"
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.237217 5008 scope.go:117] "RemoveContainer" containerID="38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316"
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.504017 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.579454 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.944403 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"7aab69f3b27570d6bdb4523bdea817bf898ffe9d0a38ea095cae12c9cdcf973f"}
Jan 29 15:46:39 crc kubenswrapper[5008]: I0129 15:46:39.335445 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" path="/var/lib/kubelet/pods/551951b1-6601-4b58-ab3c-aa03c962e65d/volumes"
Jan 29 15:46:39 crc kubenswrapper[5008]: I0129 15:46:39.336557 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" path="/var/lib/kubelet/pods/decefe5c-189e-43f8-88b2-f93a00567c3e/volumes"
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.204752 5008 scope.go:117] "RemoveContainer" containerID="2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092"
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.869660 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.961279 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerStarted","Data":"bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880"}
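The galera pods above move from probe="startup" status="unhealthy" to status="started" and only then to readiness status="ready": the kubelet does not evaluate readiness until the startup probe has succeeded (an empty status="" means no probe result yet). A simplified sketch of that gating, not the kubelet's prober implementation:

```go
package main

import "fmt"

type pod struct{ started, ready bool }

// probeLoop mirrors the ordering in the log: readiness is only
// evaluated once the startup probe has reported success.
func probeLoop(p *pod, startupOK, readinessOK func() bool) {
	if !p.started {
		if !startupOK() {
			fmt.Println(`probe="startup" status="unhealthy"`)
			return
		}
		p.started = true
		fmt.Println(`probe="startup" status="started"`)
	}
	if readinessOK() {
		p.ready = true
		fmt.Println(`probe="readiness" status="ready"`)
	}
}

func main() {
	p := &pod{}
	calls := 0
	up := func() bool { calls++; return calls > 1 } // fails once, then passes
	probeLoop(p, up, up)
	probeLoop(p, up, up)
}
```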
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.963981 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"809c8215ecf172cb8d1fff367b10f9cc603ad0233c0f9adb12149813077f000d"}
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.964274 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.978635 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 29 15:46:41 crc kubenswrapper[5008]: I0129 15:46:41.001424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-phmts" podStartSLOduration=2.17650442 podStartE2EDuration="11.001398257s" podCreationTimestamp="2026-01-29 15:46:30 +0000 UTC" firstStartedPulling="2026-01-29 15:46:31.431457956 +0000 UTC m=+1135.104312193" lastFinishedPulling="2026-01-29 15:46:40.256351793 +0000 UTC m=+1143.929206030" observedRunningTime="2026-01-29 15:46:40.982871487 +0000 UTC m=+1144.655725764" watchObservedRunningTime="2026-01-29 15:46:41.001398257 +0000 UTC m=+1144.674252494"
Jan 29 15:46:41 crc kubenswrapper[5008]: I0129 15:46:41.044424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.930948742 podStartE2EDuration="12.04440618s" podCreationTimestamp="2026-01-29 15:46:29 +0000 UTC" firstStartedPulling="2026-01-29 15:46:30.098587183 +0000 UTC m=+1133.771441440" lastFinishedPulling="2026-01-29 15:46:38.212044641 +0000 UTC m=+1141.884898878" observedRunningTime="2026-01-29 15:46:41.021884473 +0000 UTC m=+1144.694738720" watchObservedRunningTime="2026-01-29 15:46:41.04440618 +0000 UTC m=+1144.717260427"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.149547 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d79ml"]
Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150330 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150352 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server"
Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150373 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150383 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns"
Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150397 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="init"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150407 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="init"
Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150434 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-content"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151020 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-content"
Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.151044 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-utilities"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151054 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-utilities"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151638 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151707 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.153098 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.157014 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.165021 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d79ml"]
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.283250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.283693 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.371015 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.386056 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.386321 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.387509 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
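The startup-latency entries above expose the SLO accounting: podStartSLOduration is podStartE2EDuration minus the time spent pulling images. For swift-ring-rebalance-phmts: e2e 11.001398257s, pulling from 15:46:31.431457956 to 15:46:40.256351793 (8.824893837s), leaving 2.17650442s, which is exactly the logged SLO value. A quick check of that arithmetic in Go (values copied from the log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-01-29 15:46:31.431457956 +0000 UTC")
	last, _ := time.Parse(layout, "2026-01-29 15:46:40.256351793 +0000 UTC")
	e2e := 11001398257 * time.Nanosecond // podStartE2EDuration from the log
	pull := last.Sub(first)
	fmt.Println("pull:", pull)     // 8.824893837s
	fmt.Println("SLO:", e2e-pull)  // 2.17650442s, matching podStartSLOduration
}
```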
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.420333 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.444643 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"]
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.444914 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns" containerID="cri-o://41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e" gracePeriod=10
Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.478187 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.020880 5008 generic.go:334] "Generic (PLEG): container finished" podID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerID="41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e" exitCode=0
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.020970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e"}
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.034270 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d79ml"]
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.120493 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf"
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.309713 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") "
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.309881 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") "
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.310041 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") "
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.316013 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk" (OuterVolumeSpecName: "kube-api-access-75fqk") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "kube-api-access-75fqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.353690 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config" (OuterVolumeSpecName: "config") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.353860 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412622 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412664 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412676 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.021835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0"
Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022356 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022373 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
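The old dnsmasq pod is deleted with gracePeriod=10: the kubelet asks CRI-O to stop the container, dnsmasq exits cleanly on the stop signal (exitCode=0 above) well inside the window, and no force-kill is needed. A sketch of the TERM-then-KILL pattern underneath, with a stand-in process rather than a real container:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	grace := 10 * time.Second
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop, like the runtime's StopContainer
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL only after the grace period expires
		<-done
		fmt.Println("killed after grace period")
	}
}
```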
Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022422 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:47:02.022404248 +0000 UTC m=+1165.695258485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.030907 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c8683a3-18f6-4242-9991-b542aed9143b" containerID="a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f" exitCode=0
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.030969 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerDied","Data":"a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f"}
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465"}
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035899 5008 scope.go:117] "RemoveContainer" containerID="41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035955 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.049018 5008 generic.go:334] "Generic (PLEG): container finished" podID="4dcd0990-beb1-445a-b387-b2b78c1a39d2" containerID="2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c" exitCode=0
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.049093 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerDied","Data":"2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c"}
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051866 5008 generic.go:334] "Generic (PLEG): container finished" podID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerID="e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d" exitCode=0
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051902 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerDied","Data":"e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d"}
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051925 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerStarted","Data":"ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555"}
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.199831 5008 scope.go:117] "RemoveContainer" containerID="074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.234269 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"]
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.240489 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"]
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.700373 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pggzk"]
Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.701996 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702101 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns"
Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.702190 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="init"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702254 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="init"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702529 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.703234 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.707357 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pggzk"]
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.811343 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"]
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.815608 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.822434 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.852040 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.852642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.866102 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"]
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954527 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954588 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.955322 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.973909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.004880 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8tpqs"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.006116 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.018333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8tpqs"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.027472 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pggzk"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.055811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.056193 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.056765 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.072307 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"17a9d85c4e86267ed17f122162314c4abf33109c5d7f30dc6ebf14f80d93172f"}
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.072563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.077181 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"9b31c687c333d16fa1b4aaf245a078f04a0f3ed0c06a452ddad2c14ecb517683"}
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.077720 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.078291 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.116962 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.241720357 podStartE2EDuration="55.116941979s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:59.397241888 +0000 UTC m=+1103.070096135" lastFinishedPulling="2026-01-29 15:46:12.27246352 +0000 UTC m=+1115.945317757" observedRunningTime="2026-01-29 15:46:47.103527545 +0000 UTC m=+1150.776381782" watchObservedRunningTime="2026-01-29 15:46:47.116941979 +0000 UTC m=+1150.789796216"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.118829 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.119863 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.123174 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.140948 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.143268 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.385315062 podStartE2EDuration="55.143247948s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:58.582046912 +0000 UTC m=+1102.254901149" lastFinishedPulling="2026-01-29 15:46:12.339979798 +0000 UTC m=+1116.012834035" observedRunningTime="2026-01-29 15:46:47.132764394 +0000 UTC m=+1150.805618641" watchObservedRunningTime="2026-01-29 15:46:47.143247948 +0000 UTC m=+1150.816102185"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.157755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.158128 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.159447 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.261558 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.261922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262933 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.280562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.310642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rvpz6"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.311720 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.339762 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" path="/var/lib/kubelet/pods/d528ee94-b499-4f20-8603-6dcc9e8b0361/volumes"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.340713 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rvpz6"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.340898 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.348210 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.349229 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.351094 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.356674 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.362726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.362811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.365382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.393532 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml"
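Every one of these db-create and account-create jobs goes through the same per-volume admission pipeline: "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded", once for an operator-scripts ConfigMap and once for a projected kube-api-access token. A deliberately simplified sketch of the three stages in sequence (real kubelet runs them as asynchronous operations with per-volume locking):

```go
package main

import "fmt"

// mount echoes the per-volume sequence logged above.
func mount(pod string, volumes []string) {
	for _, v := range volumes {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%q\n", v, pod)
		fmt.Printf("MountVolume started for volume %q pod=%q\n", v, pod)
		// plugin SetUp (ConfigMap write-out / token projection) happens here
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v, pod)
	}
}

func main() {
	mount("openstack/glance-db-create-rvpz6", []string{"operator-scripts", "kube-api-access-42nvh"})
}
```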
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.441361 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469030 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469094 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469146 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.537296 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pggzk"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570879 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570911 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.571663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.571901 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.589550 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.598522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.629755 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rvpz6"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.663136 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.764960 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d79ml"
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.765337 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"]
Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.786583 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bc21a6_d1eb_4200_add0_523a33ffb2ff.slice/crio-741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9 WatchSource:0}: Error finding container 741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9: Status 404 returned error can't find the container with id 741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.815355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"]
Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.828540 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd141cd_e623_4692_892c_cf683275d378.slice/crio-b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff WatchSource:0}: Error finding container b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff: Status 404 returned error can't find the container with id b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.859530 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8tpqs"]
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.875403 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") "
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.875433 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") "
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.876320 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "907129fe-50cb-47ef-bbf6-db42cd2ad1ae" (UID: "907129fe-50cb-47ef-bbf6-db42cd2ad1ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.882466 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c" (OuterVolumeSpecName: "kube-api-access-tnk2c") pod "907129fe-50cb-47ef-bbf6-db42cd2ad1ae" (UID: "907129fe-50cb-47ef-bbf6-db42cd2ad1ae"). InnerVolumeSpecName "kube-api-access-tnk2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.911274 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08da0630_8fe2_4a33_be0c_d81bba67c32c.slice/crio-6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877 WatchSource:0}: Error finding container 6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877: Status 404 returned error can't find the container with id 6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.976977 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.977008 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.085145 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerStarted","Data":"08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a"}
Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.085202 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerStarted","Data":"b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff"}
Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.086510 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerDied","Data":"ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555"}
Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.086544 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555"
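The manager.go warnings above appear to be cAdvisor racing container startup: it receives a cgroup watch event for a freshly created crio-… scope before the runtime has registered the container, so the lookup returns 404; the containers then start normally within milliseconds, so the warning is transient. A sketch of the tolerate-transient-not-found pattern such a watcher can use (not cAdvisor's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: container not found")

// handleWatchEvent tolerates the startup race seen above: a cgroup
// appears before the runtime can answer queries about it.
func handleWatchEvent(lookup func(id string) error, id string) {
	if err := lookup(id); errors.Is(err, errNotFound) {
		fmt.Printf("failed to process watch event for %s: %v (transient, retry later)\n", id, err)
		return
	}
	fmt.Printf("container %s registered\n", id)
}

func main() {
	calls := 0
	lookup := func(string) error {
		calls++
		if calls == 1 {
			return errNotFound // first lookup races the runtime
		}
		return nil
	}
	handleWatchEvent(lookup, "741b8610835b")
	handleWatchEvent(lookup, "741b8610835b")
}
```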
Need to start a new one" pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.087876 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerStarted","Data":"c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.087911 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerStarted","Data":"6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.089688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerStarted","Data":"9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.089722 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerStarted","Data":"741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.091325 5008 generic.go:334] "Generic (PLEG): container finished" podID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerID="bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880" exitCode=0 Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.091407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerDied","Data":"bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.093726 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerStarted","Data":"a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.093763 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerStarted","Data":"b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.104071 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4a04-account-create-update-2cfml" podStartSLOduration=1.1040534850000001 podStartE2EDuration="1.104053485s" podCreationTimestamp="2026-01-29 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.102044967 +0000 UTC m=+1151.774899204" watchObservedRunningTime="2026-01-29 15:46:48.104053485 +0000 UTC m=+1151.776907722" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.149839 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8tpqs" podStartSLOduration=2.149811185 podStartE2EDuration="2.149811185s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:46:48.129843341 +0000 UTC m=+1151.802697578" watchObservedRunningTime="2026-01-29 15:46:48.149811185 +0000 UTC m=+1151.822665442" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.172999 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e4e6-account-create-update-6vxmr" podStartSLOduration=2.172979858 podStartE2EDuration="2.172979858s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.162659637 +0000 UTC m=+1151.835513874" watchObservedRunningTime="2026-01-29 15:46:48.172979858 +0000 UTC m=+1151.845834095" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.221831 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:46:48 crc kubenswrapper[5008]: W0129 15:46:48.251231 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328d3758_78bd_4a08_b91f_f2f4c9b8b645.slice/crio-34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683 WatchSource:0}: Error finding container 34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683: Status 404 returned error can't find the container with id 34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683 Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.272607 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-pggzk" podStartSLOduration=2.272589534 podStartE2EDuration="2.272589534s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.214232198 +0000 UTC m=+1151.887086435" watchObservedRunningTime="2026-01-29 15:46:48.272589534 +0000 UTC m=+1151.945443771" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.323629 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.106227 5008 generic.go:334] "Generic (PLEG): container finished" podID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerID="9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.106600 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerDied","Data":"9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.110541 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerStarted","Data":"d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.110576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerStarted","Data":"9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.114511 5008 generic.go:334] "Generic (PLEG): container finished" podID="6fd141cd-e623-4692-892c-cf683275d378" 
containerID="08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.114681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerDied","Data":"08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.116688 5008 generic.go:334] "Generic (PLEG): container finished" podID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerID="a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.116737 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerDied","Data":"a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.118735 5008 generic.go:334] "Generic (PLEG): container finished" podID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerID="c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.118837 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerDied","Data":"c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126860 5008 generic.go:334] "Generic (PLEG): container finished" podID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerID="d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerDied","Data":"d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126966 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerStarted","Data":"34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.196096 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-rvpz6" podStartSLOduration=2.196079766 podStartE2EDuration="2.196079766s" podCreationTimestamp="2026-01-29 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:49.191515565 +0000 UTC m=+1152.864369792" watchObservedRunningTime="2026-01-29 15:46:49.196079766 +0000 UTC m=+1152.868934013" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.461894 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.604839 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605123 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605172 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605205 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605270 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605328 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605380 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605615 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605934 5008 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.606880 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.615714 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.617056 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.623025 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66" (OuterVolumeSpecName: "kube-api-access-gxr66") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "kube-api-access-gxr66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.630956 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts" (OuterVolumeSpecName: "scripts") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.634182 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.636911 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718374 5008 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718419 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718432 5008 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718445 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718460 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718472 5008 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.136938 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.136913 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerDied","Data":"a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20"} Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.137061 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.139066 5008 generic.go:334] "Generic (PLEG): container finished" podID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerID="d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a" exitCode=0 Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.139128 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerDied","Data":"d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a"} Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.458663 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.463110 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.571484 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.634435 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.634502 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.635828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30bc21a6-d1eb-4200-add0-523a33ffb2ff" (UID: "30bc21a6-d1eb-4200-add0-523a33ffb2ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.649460 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8" (OuterVolumeSpecName: "kube-api-access-gpwj8") pod "30bc21a6-d1eb-4200-add0-523a33ffb2ff" (UID: "30bc21a6-d1eb-4200-add0-523a33ffb2ff"). InnerVolumeSpecName "kube-api-access-gpwj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.736640 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.736676 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.759456 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.768141 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.810867 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.822459 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838559 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838778 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"08da0630-8fe2-4a33-be0c-d81bba67c32c\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"08da0630-8fe2-4a33-be0c-d81bba67c32c\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.841563 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08da0630-8fe2-4a33-be0c-d81bba67c32c" (UID: "08da0630-8fe2-4a33-be0c-d81bba67c32c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.844976 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "328d3758-78bd-4a08-b91f-f2f4c9b8b645" (UID: "328d3758-78bd-4a08-b91f-f2f4c9b8b645"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.845585 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf" (OuterVolumeSpecName: "kube-api-access-kwbdf") pod "328d3758-78bd-4a08-b91f-f2f4c9b8b645" (UID: "328d3758-78bd-4a08-b91f-f2f4c9b8b645"). InnerVolumeSpecName "kube-api-access-kwbdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.857046 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2" (OuterVolumeSpecName: "kube-api-access-chrb2") pod "08da0630-8fe2-4a33-be0c-d81bba67c32c" (UID: "08da0630-8fe2-4a33-be0c-d81bba67c32c"). InnerVolumeSpecName "kube-api-access-chrb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.941838 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"232739d0-09f9-4843-8c9f-fc19bc53763f\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.941948 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"6fd141cd-e623-4692-892c-cf683275d378\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942011 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"232739d0-09f9-4843-8c9f-fc19bc53763f\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942056 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"6fd141cd-e623-4692-892c-cf683275d378\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942449 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942494 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942508 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942734 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd141cd-e623-4692-892c-cf683275d378" (UID: "6fd141cd-e623-4692-892c-cf683275d378"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942748 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "232739d0-09f9-4843-8c9f-fc19bc53763f" (UID: "232739d0-09f9-4843-8c9f-fc19bc53763f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.945898 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q" (OuterVolumeSpecName: "kube-api-access-kfz4q") pod "6fd141cd-e623-4692-892c-cf683275d378" (UID: "6fd141cd-e623-4692-892c-cf683275d378"). InnerVolumeSpecName "kube-api-access-kfz4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.945979 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff" (OuterVolumeSpecName: "kube-api-access-lzbff") pod "232739d0-09f9-4843-8c9f-fc19bc53763f" (UID: "232739d0-09f9-4843-8c9f-fc19bc53763f"). InnerVolumeSpecName "kube-api-access-lzbff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044298 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044334 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044343 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044351 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.147666 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerDied","Data":"b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.148151 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.147702 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160396 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerDied","Data":"b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160457 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160410 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163144 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerDied","Data":"6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163195 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163284 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerDied","Data":"34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165436 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165469 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168120 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerDied","Data":"741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168163 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168183 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.334517 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" path="/var/lib/kubelet/pods/907129fe-50cb-47ef-bbf6-db42cd2ad1ae/volumes" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.521434 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.653197 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"207579aa-feff-4069-8fcb-02c5b9cd107f\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.653252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"207579aa-feff-4069-8fcb-02c5b9cd107f\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.654072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "207579aa-feff-4069-8fcb-02c5b9cd107f" (UID: "207579aa-feff-4069-8fcb-02c5b9cd107f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.668265 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh" (OuterVolumeSpecName: "kube-api-access-42nvh") pod "207579aa-feff-4069-8fcb-02c5b9cd107f" (UID: "207579aa-feff-4069-8fcb-02c5b9cd107f"). InnerVolumeSpecName "kube-api-access-42nvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.754760 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.754812 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerDied","Data":"9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8"} Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177222 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8" Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177222 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:53 crc kubenswrapper[5008]: I0129 15:46:53.357385 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bw9wr" podUID="0dd702c8-269b-4fb6-a3a7-03adf93d916a" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:46:53 crc kubenswrapper[5008]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:46:53 crc kubenswrapper[5008]: > Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.475266 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476108 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476123 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476137 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476144 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476174 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476183 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476192 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476199 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476211 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476218 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476230 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476236 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476249 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd141cd-e623-4692-892c-cf683275d378" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476256 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd141cd-e623-4692-892c-cf683275d378" 
containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476271 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476445 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476460 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476473 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476483 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476493 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476505 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476513 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476525 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd141cd-e623-4692-892c-cf683275d378" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.477289 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.479258 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.495247 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.617176 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.617605 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.719715 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.719976 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.720683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.740052 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.816850 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:56 crc kubenswrapper[5008]: I0129 15:46:56.249902 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:56 crc kubenswrapper[5008]: W0129 15:46:56.255381 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c93f6a_d803_4df3_8b35_191cbe683adf.slice/crio-b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074 WatchSource:0}: Error finding container b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074: Status 404 returned error can't find the container with id b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074 Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.217841 5008 generic.go:334] "Generic (PLEG): container finished" podID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerID="88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1" exitCode=0 Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.217910 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerDied","Data":"88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1"} Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.218104 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerStarted","Data":"b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074"} Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.500311 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.501331 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.503210 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2qq6q" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.503595 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.514444 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652734 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652832 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652964 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755401 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755437 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod 
\"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.762429 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.762438 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.763647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.775166 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.830737 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.365085 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bw9wr" podUID="0dd702c8-269b-4fb6-a3a7-03adf93d916a" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:46:58 crc kubenswrapper[5008]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:46:58 crc kubenswrapper[5008]: > Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.420995 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.459176 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.461848 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.551462 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.676707 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"98c93f6a-d803-4df3-8b35-191cbe683adf\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.676825 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"98c93f6a-d803-4df3-8b35-191cbe683adf\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.678195 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98c93f6a-d803-4df3-8b35-191cbe683adf" (UID: "98c93f6a-d803-4df3-8b35-191cbe683adf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.679446 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:46:58 crc kubenswrapper[5008]: E0129 15:46:58.679894 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.679988 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.680263 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.680745 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.681828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx" (OuterVolumeSpecName: "kube-api-access-s5wjx") pod "98c93f6a-d803-4df3-8b35-191cbe683adf" (UID: "98c93f6a-d803-4df3-8b35-191cbe683adf"). InnerVolumeSpecName "kube-api-access-s5wjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.682539 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.698708 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779061 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779102 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779440 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.780024 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.780056 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.880885 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.880931 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881025 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881096 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881279 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881324 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881368 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881466 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.882402 5008 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.885385 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.899775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.007630 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240415 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerDied","Data":"b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074"} Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240881 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.242017 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerStarted","Data":"b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373"} Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.536994 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.250766 5008 generic.go:334] "Generic (PLEG): container finished" podID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerID="1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766" exitCode=0 Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.251278 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerDied","Data":"1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766"} Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.251300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerStarted","Data":"1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c"} Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.600234 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758164 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758235 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758321 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758476 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758766 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run" (OuterVolumeSpecName: "var-run") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758867 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758895 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759282 5008 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759293 5008 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759304 5008 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.760591 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts" (OuterVolumeSpecName: "scripts") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.764880 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh" (OuterVolumeSpecName: "kube-api-access-2knfh") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "kube-api-access-2knfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861069 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861373 5008 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861508 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.064825 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.071497 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.238014 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269825 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerDied","Data":"1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c"} Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269865 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269984 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.695143 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.701111 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.767547 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.280768 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"80c25143b6f67fe98fdac7a3c17c4bcd0f31a6fa3e14bac09bed0dca8ef6218d"} Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.340649 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" path="/var/lib/kubelet/pods/3bbdbac9-d640-400e-a2a1-69c7e09a3211/volumes" Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.364336 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bw9wr" Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.814045 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.085985 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.125530 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:04 crc kubenswrapper[5008]: E0129 15:47:04.125872 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.125889 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.126048 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.128377 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.138503 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.139494 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.141611 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.151526 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.158859 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226026 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226671 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.227894 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.247184 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327218 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327270 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327290 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327335 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc 
kubenswrapper[5008]: I0129 15:47:04.327351 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327393 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.328119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.341790 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.343089 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.348072 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.363532 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.369523 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.421917 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.423178 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425385 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425652 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425771 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430965 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430995 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431011 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431131 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431174 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431953 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: 
\"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.432052 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.447039 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.450665 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.453591 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.464576 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.473758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.537206 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.538356 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.542642 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.542753 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543022 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543109 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543166 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.551317 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.552476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.555734 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.555970 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.561910 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.564645 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.576961 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.644999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645100 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645157 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645182 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645212 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod 
\"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.649095 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.657696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.660196 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.663145 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746277 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746338 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746356 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746379 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.747597 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.748186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.747934 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.761802 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.775269 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.858128 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.920991 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.399707 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.402108 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-n7wgw_openstack(8277eb2b-44f8-4fd9-af92-1832e0272e0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.403308 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-n7wgw" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.563043 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-n7wgw" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.023869 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.048635 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.056599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.062362 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.067965 5008 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.075104 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.082595 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.175630 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75706daa_3e40_4bbe_bb1b_44120719d48d.slice/crio-42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c WatchSource:0}: Error finding container 42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c: Status 404 returned error can't find the container with id 42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.204716 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc0f9ba_13f2_4092_b3e4_a5744ae24174.slice/crio-e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2 WatchSource:0}: Error finding container e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2: Status 404 returned error can't find the container with id e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2 Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.205276 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod826ac6d8_e950_4bd5_b5f4_0d3f5be5b960.slice/crio-627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47 WatchSource:0}: Error finding container 627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47: Status 404 returned error can't find the container with id 627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47 Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.567144 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerStarted","Data":"7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.568550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerStarted","Data":"e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.569368 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerStarted","Data":"e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.571035 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerStarted","Data":"f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.571070 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" 
event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerStarted","Data":"94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.574589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerStarted","Data":"6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.574645 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerStarted","Data":"42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.575896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerStarted","Data":"627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.577423 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerStarted","Data":"ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.590763 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2158-account-create-update-pjst9" podStartSLOduration=20.590742163 podStartE2EDuration="20.590742163s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:47:24.584370058 +0000 UTC m=+1188.257224295" watchObservedRunningTime="2026-01-29 15:47:24.590742163 +0000 UTC m=+1188.263596400" Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.605020 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ch7lz" podStartSLOduration=20.605001019 podStartE2EDuration="20.605001019s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:47:24.600329266 +0000 UTC m=+1188.273183523" watchObservedRunningTime="2026-01-29 15:47:24.605001019 +0000 UTC m=+1188.277855256" Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.592629 5008 generic.go:334] "Generic (PLEG): container finished" podID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerID="ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.592845 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerDied","Data":"ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.595561 5008 generic.go:334] "Generic (PLEG): container finished" podID="36bf973b-f73a-425e-9923-09caa2622a41" containerID="64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.595605 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" 
event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerDied","Data":"64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.605087 5008 generic.go:334] "Generic (PLEG): container finished" podID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerID="e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.605215 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerDied","Data":"e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.612455 5008 generic.go:334] "Generic (PLEG): container finished" podID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerID="6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.612552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerDied","Data":"6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616301 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"2212529dd2f325960b0a75d9f75f86cf2ff6a278a3f594a0528f1f59cdb29f95"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"cbb09b30f2da85dabef49da1927febd1bd6890e6db3d10092cebb71cfa1da299"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616544 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"25611f8a32294d584338b6ed28f48d7d0cbad43cf19e86aa5d7009d821a5705e"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616629 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"1a9b0307771a31787dd09578530f5d5331db12304403edf1b9227795cf40f412"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.619213 5008 generic.go:334] "Generic (PLEG): container finished" podID="0494524d-f73e-4534-9064-b578d41bea87" containerID="f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.619293 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerDied","Data":"f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.624064 5008 generic.go:334] "Generic (PLEG): container finished" podID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerID="6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.624143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" 
event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerDied","Data":"6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.650875 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.651842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerDied","Data":"42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.651886 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.654226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerDied","Data":"627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.654253 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.655776 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerDied","Data":"ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.655824 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657083 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657892 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerDied","Data":"7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657927 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659566 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerDied","Data":"e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659593 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659636 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.661663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerDied","Data":"94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.661690 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.673518 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.719303 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.725678 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.744306 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804432 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804492 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804542 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804668 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804693 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804775 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\" (UID: 
\"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbc0f9ba-13f2-4092-b3e4-a5744ae24174" (UID: "bbc0f9ba-13f2-4092-b3e4-a5744ae24174"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805348 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" (UID: "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805417 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" (UID: "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.808363 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5" (OuterVolumeSpecName: "kube-api-access-xj4h5") pod "bbc0f9ba-13f2-4092-b3e4-a5744ae24174" (UID: "bbc0f9ba-13f2-4092-b3e4-a5744ae24174"). InnerVolumeSpecName "kube-api-access-xj4h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.808657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878" (OuterVolumeSpecName: "kube-api-access-z2878") pod "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" (UID: "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e"). InnerVolumeSpecName "kube-api-access-z2878". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.814760 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv" (OuterVolumeSpecName: "kube-api-access-hprzv") pod "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" (UID: "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960"). InnerVolumeSpecName "kube-api-access-hprzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906071 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"75706daa-3e40-4bbe-bb1b-44120719d48d\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906141 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"36bf973b-f73a-425e-9923-09caa2622a41\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906167 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"0494524d-f73e-4534-9064-b578d41bea87\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906190 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"0494524d-f73e-4534-9064-b578d41bea87\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906276 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"75706daa-3e40-4bbe-bb1b-44120719d48d\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906315 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"36bf973b-f73a-425e-9923-09caa2622a41\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75706daa-3e40-4bbe-bb1b-44120719d48d" (UID: "75706daa-3e40-4bbe-bb1b-44120719d48d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906536 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36bf973b-f73a-425e-9923-09caa2622a41" (UID: "36bf973b-f73a-425e-9923-09caa2622a41"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906823 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906852 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906868 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906880 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906893 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0494524d-f73e-4534-9064-b578d41bea87" (UID: "0494524d-f73e-4534-9064-b578d41bea87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907383 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907409 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907421 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.909817 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q" (OuterVolumeSpecName: "kube-api-access-qvf5q") pod "0494524d-f73e-4534-9064-b578d41bea87" (UID: "0494524d-f73e-4534-9064-b578d41bea87"). InnerVolumeSpecName "kube-api-access-qvf5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.909969 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm" (OuterVolumeSpecName: "kube-api-access-f9pkm") pod "75706daa-3e40-4bbe-bb1b-44120719d48d" (UID: "75706daa-3e40-4bbe-bb1b-44120719d48d"). 
InnerVolumeSpecName "kube-api-access-f9pkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.910990 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6" (OuterVolumeSpecName: "kube-api-access-ljgr6") pod "36bf973b-f73a-425e-9923-09caa2622a41" (UID: "36bf973b-f73a-425e-9923-09caa2622a41"). InnerVolumeSpecName "kube-api-access-ljgr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.008517 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009192 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009213 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009224 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.676141 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"d023f0a06c3a4858a00f2e869c3a0dbb0bed1aa0a84b387042d32627a5131e98"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677206 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9fbf7b1ddf1641b2b56def51b3cd15d59889fb82eeab0a92495fb54fa70a3584"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677355 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"b7a2845d3f78241dfae6156f9b58f3be79eb1b7aeaadc6035f50335680bc6960"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"5055f3c6cc3af28f8f53be3a562c6490dbaf97f77ac697e5466544ec9a05d491"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678143 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678320 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678541 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678581 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678619 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678628 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerStarted","Data":"eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.712771 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rdpcb" podStartSLOduration=21.463216666 podStartE2EDuration="25.712744574s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="2026-01-29 15:47:24.187114781 +0000 UTC m=+1187.859969018" lastFinishedPulling="2026-01-29 15:47:28.436642689 +0000 UTC m=+1192.109496926" observedRunningTime="2026-01-29 15:47:29.700065037 +0000 UTC m=+1193.372919294" watchObservedRunningTime="2026-01-29 15:47:29.712744574 +0000 UTC m=+1193.385598821" Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.700998 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"c42fa5ec399e9df0cd5d9503d61de7bf9bdcb5b5027dcd02746f8446fed7da66"} Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.701293 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9eb9b05b0bef55e69bf25de1e5d402963b23a145e9cd5e7bf113c41de93a6318"} Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.701308 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"fda414e92a3ca3aface69b1a9c98558a6ec8b9c8d878064054f64fa3507d1b0c"} Jan 29 15:47:32 crc kubenswrapper[5008]: I0129 15:47:32.714555 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"6cf28e6fc3cff5bdc43980966f3928eb9a5c5615d1c60550b291c734697d20c8"} Jan 29 15:47:33 crc kubenswrapper[5008]: I0129 15:47:33.734143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9fb6e0bee1283670a65e1e295200557f9a262303b6d0de045b04513bb4e07886"} Jan 29 15:47:35 crc kubenswrapper[5008]: I0129 15:47:35.755075 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"dcd568c6c622d136e4a94c3dc4bc9021d6aa1b554bf5fac44a2f31e5ba6c5c56"} Jan 29 15:47:37 crc kubenswrapper[5008]: I0129 15:47:37.774842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"a37e87d63e7a4f5cd475c5cc437007014e64b560a242462428fe61e6e7ca18ad"} Jan 29 15:47:37 crc kubenswrapper[5008]: 
I0129 15:47:37.835927 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.617522106 podStartE2EDuration="1m8.835909978s" podCreationTimestamp="2026-01-29 15:46:29 +0000 UTC" firstStartedPulling="2026-01-29 15:47:02.78615543 +0000 UTC m=+1166.459009667" lastFinishedPulling="2026-01-29 15:47:31.004543302 +0000 UTC m=+1194.677397539" observedRunningTime="2026-01-29 15:47:37.826699344 +0000 UTC m=+1201.499553641" watchObservedRunningTime="2026-01-29 15:47:37.835909978 +0000 UTC m=+1201.508764205" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.134863 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135168 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135180 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135198 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135204 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135217 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135223 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135235 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135241 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135254 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135260 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135273 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135280 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135440 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135456 5008 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135464 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135473 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135480 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135489 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.136228 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.139973 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.159409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222693 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222770 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222875 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222902 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.223011 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc 
kubenswrapper[5008]: I0129 15:47:38.223143 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324589 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324635 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324819 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325757 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325864 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325917 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.326177 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.348184 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.456236 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:43 crc kubenswrapper[5008]: I0129 15:47:43.990438 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:43 crc kubenswrapper[5008]: I0129 15:47:43.991158 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:01 crc kubenswrapper[5008]: I0129 15:48:01.972088 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:48:02 crc kubenswrapper[5008]: I0129 15:48:02.006609 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerStarted","Data":"ce4f811545cec808190704383cf9c2a75b48fb0966a323612a8e888c6a8f70bd"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.037680 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerID="083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e" exitCode=0 Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.037896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.041134 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerStarted","Data":"bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.084637 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-n7wgw" podStartSLOduration=2.993305539 podStartE2EDuration="1m6.084613s" podCreationTimestamp="2026-01-29 15:46:57 +0000 UTC" firstStartedPulling="2026-01-29 15:46:58.426948702 +0000 UTC m=+1162.099802959" lastFinishedPulling="2026-01-29 15:48:01.518256183 +0000 UTC m=+1225.191110420" observedRunningTime="2026-01-29 15:48:03.079871855 +0000 UTC m=+1226.752726082" watchObservedRunningTime="2026-01-29 15:48:03.084613 +0000 UTC m=+1226.757467267" Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.055918 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerStarted","Data":"8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b"} Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.057972 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.088523 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podStartSLOduration=26.088492343 podStartE2EDuration="26.088492343s" podCreationTimestamp="2026-01-29 15:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:04.082430285 +0000 UTC m=+1227.755284552" watchObservedRunningTime="2026-01-29 15:48:04.088492343 +0000 UTC m=+1227.761346650" Jan 29 15:48:07 crc kubenswrapper[5008]: I0129 15:48:07.084475 5008 generic.go:334] "Generic (PLEG): container finished" podID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerID="eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f" exitCode=0 Jan 29 15:48:07 crc kubenswrapper[5008]: I0129 15:48:07.084608 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerDied","Data":"eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f"} Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.472451 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.487506 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.531974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.532280 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" containerID="cri-o://ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" gracePeriod=10 Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669021 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669112 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669301 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.678762 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln" (OuterVolumeSpecName: "kube-api-access-6xlln") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "kube-api-access-6xlln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.701543 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.721615 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data" (OuterVolumeSpecName: "config-data") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770593 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770626 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770642 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.938054 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075659 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075773 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075815 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075914 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075942 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.087745 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2" (OuterVolumeSpecName: "kube-api-access-qsqq2") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "kube-api-access-qsqq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104052 5008 generic.go:334] "Generic (PLEG): container finished" podID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" exitCode=0 Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104122 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104158 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104213 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"e0537e06f45058060e30f1ea912f4b791f0f50a83a241274268db34f9a3ef7fc"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104231 5008 scope.go:117] "RemoveContainer" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106535 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerDied","Data":"e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106566 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106616 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.116036 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.128314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.137546 5008 scope.go:117] "RemoveContainer" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.139473 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config" (OuterVolumeSpecName: "config") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.168534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178346 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178400 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178411 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178420 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178431 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188313 5008 scope.go:117] "RemoveContainer" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.188812 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": container with ID starting with ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831 not found: ID does not exist" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188850 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"} err="failed to get container status \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": rpc error: code = NotFound desc = could not find container \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": container with ID starting with ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831 not found: ID does not exist" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188875 5008 scope.go:117] "RemoveContainer" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.189185 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": container with ID starting with 
01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94 not found: ID does not exist" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.189219 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94"} err="failed to get container status \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": rpc error: code = NotFound desc = could not find container \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": container with ID starting with 01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94 not found: ID does not exist" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334398 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334637 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334649 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334660 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="init" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334666 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="init" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334691 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334696 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334842 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334855 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.335600 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.345312 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.348774 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.362491 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.362850 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.364539 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.365521 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.367037 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.376364 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.470246 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505763 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505826 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505916 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505941 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505966 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: 
\"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506002 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506077 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506090 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506104 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.538252 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.539665 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-8svhc" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545585 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545943 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.546194 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.554045 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.568864 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.577766 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.589131 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.590146 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.600470 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.601325 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x6pwm" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.601517 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.607942 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.607997 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608056 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " 
pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608073 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608094 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608131 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608155 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608171 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608200 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608222 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608241 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.609573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.609732 
5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.610085 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.613009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.613683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.619841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.620207 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.635548 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.640508 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.642557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.651397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod 
\"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.667342 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.692452 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.703940 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.704527 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709252 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709331 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709345 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709370 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709448 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709474 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709493 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709515 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709539 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.731929 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.733262 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.737858 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wg4h5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.738171 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.771870 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.773948 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.788766 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.790029 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793403 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793586 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793487 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rlqfr" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.817772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.828523 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.829398 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830166 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830494 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830564 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830590 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " 
pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830708 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830788 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831302 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831609 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831650 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831681 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831711 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831805 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.840207 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.841458 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.850533 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.854697 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.856998 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.857511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.863245 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.870306 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.880923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.881523 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.898856 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 
15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.932962 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934008 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934465 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934583 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934670 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934865 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935020 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935108 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963186 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963239 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963292 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.957023 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941243 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.954146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.954424 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941364 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"neutron-config" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qg4fq" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.001172 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.003169 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.007847 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.008193 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.031946 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064701 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064802 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064857 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064895 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " 
pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064914 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064933 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065058 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065079 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065101 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 
15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065134 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065171 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065224 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065842 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066039 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066399 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.071139 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.071374 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.077118 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.079021 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.080892 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.087062 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.087908 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.097260 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.097592 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.105748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.106018 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.153807 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.167528 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170280 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170354 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170427 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170454 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170476 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170540 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170613 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170644 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170730 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170797 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.171624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.173140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.176859 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.182841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: 
\"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.183296 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.204632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.206195 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.207015 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.209447 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272613 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272748 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: 
I0129 15:48:10.272840 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272873 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.273670 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.273803 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.277063 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.282637 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.283749 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.284195 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.288060 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.309952 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.364231 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.420921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.423514 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:10 crc kubenswrapper[5008]: W0129 15:48:10.462969 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d4f252_93b9_4d91_9501_7fac414b7b47.slice/crio-1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527 WatchSource:0}: Error finding container 1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527: Status 404 returned error can't find the container with id 1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527 Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.589544 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.731208 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:10 crc kubenswrapper[5008]: W0129 15:48:10.740815 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9069f34b_ed91_4ced_8b05_91b83dd02938.slice/crio-87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6 WatchSource:0}: Error finding container 87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6: Status 404 returned error can't find the container with id 87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6 Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.837377 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.842466 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.983339 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.000599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:11 crc kubenswrapper[5008]: W0129 15:48:11.002952 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e9e19dd_550a_467d_bd79_03ee07c2f470.slice/crio-1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560 WatchSource:0}: Error finding container 1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560: Status 404 returned error can't find the container with id 1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.013191 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.144791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerStarted","Data":"0855c1b3124d74f066ce8585049d7c108a1ae142bfe48dd2fe48b76c9a87b4b0"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.146945 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerID="162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c" exitCode=0 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.147112 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerDied","Data":"162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.147205 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerStarted","Data":"1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.151535 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerStarted","Data":"82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.151647 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerStarted","Data":"405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.153291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerStarted","Data":"87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.154172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerStarted","Data":"7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.155270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f4589f77-j49wf" event={"ID":"8e9e19dd-550a-467d-bd79-03ee07c2f470","Type":"ContainerStarted","Data":"1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.158020 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerStarted","Data":"07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.159414 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerStarted","Data":"748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.178155 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.194214 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b8gfd" podStartSLOduration=2.194175825 podStartE2EDuration="2.194175825s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:11.193483138 +0000 
UTC m=+1234.866337375" watchObservedRunningTime="2026-01-29 15:48:11.194175825 +0000 UTC m=+1234.867030062" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.284047 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:11 crc kubenswrapper[5008]: W0129 15:48:11.286940 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b110ddf_5eea_4e32_b9f3_f07886d636a2.slice/crio-edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84 WatchSource:0}: Error finding container edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84: Status 404 returned error can't find the container with id edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.359894 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" path="/var/lib/kubelet/pods/536998c7-ad3f-4b4c-ad9e-342343eded97/volumes" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.517882 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598231 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598336 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598385 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598458 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598514 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598540 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.609608 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz" (OuterVolumeSpecName: "kube-api-access-g58mz") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "kube-api-access-g58mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.620314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.626177 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config" (OuterVolumeSpecName: "config") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.642723 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.646150 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.655627 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700404 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700447 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700464 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700476 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700487 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700499 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.947037 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990007 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:11 crc kubenswrapper[5008]: E0129 15:48:11.990370 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990383 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990534 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.991662 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013466 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013631 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.014066 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.055224 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117062 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117188 
5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117221 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117673 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.118501 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.120027 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.124259 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.142045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.176716 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerStarted","Data":"ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.178757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"c97bf01c6b949d39e9bc8fa902a0c1cf304eedee9dbe4194b2055c35de3ec4ce"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.181262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d66dd7b7-rjtfk" event={"ID":"3b110ddf-5eea-4e32-b9f3-f07886d636a2","Type":"ContainerStarted","Data":"edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.183145 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerID="3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699" exitCode=0 Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.183261 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerDied","Data":"1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193158 5008 scope.go:117] "RemoveContainer" containerID="162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193381 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.206746 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4h8lc" podStartSLOduration=3.206717958 podStartE2EDuration="3.206717958s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:12.195800173 +0000 UTC m=+1235.868654420" watchObservedRunningTime="2026-01-29 15:48:12.206717958 +0000 UTC m=+1235.879572195" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.289766 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.292841 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.339388 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.904803 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.210368 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65975bb757-q7xqt" event={"ID":"5f86a518-6363-4796-a4f4-7208aacccc99","Type":"ContainerStarted","Data":"115aa46cd8290b427be260b9a17520dfb8392c574f35bae7cb7c624f65477597"} Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.214853 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerStarted","Data":"7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae"} Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.214954 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.237008 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podStartSLOduration=4.236991291 podStartE2EDuration="4.236991291s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:13.231673432 +0000 UTC m=+1236.904527669" watchObservedRunningTime="2026-01-29 15:48:13.236991291 +0000 UTC m=+1236.909845518" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.336518 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" path="/var/lib/kubelet/pods/32d4f252-93b9-4d91-9501-7fac414b7b47/volumes" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.990896 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.990955 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.776363 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.811543 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.813275 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.815630 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.829348 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845931 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.846032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.846056 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.881927 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.912121 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.933892 
5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948077 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948188 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948214 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948352 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.950131 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.951270 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.955346 
5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.958112 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.958966 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.971464 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.975056 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.975140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050435 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050735 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050888 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051023 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 
15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051930 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.134257 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.153843 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154684 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154985 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.155209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod 
\"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.155409 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.156159 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.156209 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.157583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.158776 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.159110 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.159577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.170623 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.258421 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.170085 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.258729 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.259029 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" containerID="cri-o://8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" gracePeriod=10 Jan 29 15:48:21 crc kubenswrapper[5008]: I0129 15:48:21.292082 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerID="8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" exitCode=0 Jan 29 15:48:21 crc kubenswrapper[5008]: I0129 15:48:21.292150 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b"} Jan 29 15:48:23 crc kubenswrapper[5008]: I0129 15:48:23.457278 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:28 crc kubenswrapper[5008]: I0129 15:48:28.458299 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:33 crc kubenswrapper[5008]: I0129 15:48:33.465084 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:33 crc kubenswrapper[5008]: I0129 15:48:33.465952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.976265 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.976970 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jftb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rcl2z_openstack(4ec0e696-652d-463e-b97e-dad0065a543b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.978134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:48:36 crc kubenswrapper[5008]: E0129 15:48:36.485528 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.457392 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990758 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990845 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990896 5008 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.991625 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.992458 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" gracePeriod=600 Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.574462 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" exitCode=0 Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.574550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.575274 5008 scope.go:117] "RemoveContainer" containerID="f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" Jan 29 15:48:48 crc kubenswrapper[5008]: I0129 15:48:48.459164 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.730378 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.730933 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h68fh68fh64bh65bhc8h5fhfch668hfdh64fh66bh8dh64h674hdbh697h544h57ch59ch554h595h575h664h64bh689h549h5bdh65h6h5c8h88q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jtf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66f4589f77-j49wf_openstack(8e9e19dd-550a-467d-bd79-03ee07c2f470): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.733140 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66f4589f77-j49wf" podUID="8e9e19dd-550a-467d-bd79-03ee07c2f470" Jan 29 15:48:53 crc kubenswrapper[5008]: I0129 15:48:53.460260 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:58 crc kubenswrapper[5008]: I0129 15:48:58.461858 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:03 crc kubenswrapper[5008]: I0129 15:49:03.462633 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.290997 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.291404 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch654h59h55fh685h76hdh5d6hb8h5c9hbh645hfh54h695hcch67h696h5f6h5d8h5dbh585h5h576h644hc9h5f9h5cbh5b6h6h597h55bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfh8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59d66dd7b7-rjtfk_openstack(3b110ddf-5eea-4e32-b9f3-f07886d636a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.295773 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59d66dd7b7-rjtfk" podUID="3b110ddf-5eea-4e32-b9f3-f07886d636a2" Jan 29 15:49:08 crc kubenswrapper[5008]: I0129 15:49:08.463750 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:13 crc kubenswrapper[5008]: I0129 15:49:13.465632 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:18 crc kubenswrapper[5008]: I0129 15:49:18.467244 5008 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.800306 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.800874 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbbh89h654h697h589h59fh65ch559h9bh676h5c5h55bhcbh555h55h698h66hc9h646h59bh5c9h557h659h669hf6h54h676h57h665h5dfhb4hd8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnq52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65975bb757-q7xqt_openstack(5f86a518-6363-4796-a4f4-7208aacccc99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.803639 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65975bb757-q7xqt" podUID="5f86a518-6363-4796-a4f4-7208aacccc99" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.917832 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932350 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932582 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932651 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932723 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.933077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.933174 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.942062 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw" (OuterVolumeSpecName: "kube-api-access-zsqfw") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "kube-api-access-zsqfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.964874 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"ce4f811545cec808190704383cf9c2a75b48fb0966a323612a8e888c6a8f70bd"} Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.964900 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.013794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.018353 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.029370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040264 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040289 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040303 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040312 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.043909 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.061323 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config" (OuterVolumeSpecName: "config") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.142349 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.142385 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.237102 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.297561 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.303502 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.343012 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" path="/var/lib/kubelet/pods/1d24d44a-1e0f-43ea-a065-9c4f369e0045/volumes" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.465201 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.473321 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:49:21 crc kubenswrapper[5008]: E0129 15:49:21.507689 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 29 15:49:21 crc kubenswrapper[5008]: E0129 15:49:21.507932 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68fh666h94h85h96h57fh59fh588h5fdh647h66chbbh67ch6ch5dch68ch677h5d8h599h5fbh64ch5b7h68fhfbhbh58fh556h67dh5f6h5c8hc9h5b5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8457b44a-814e-403f-a2c9-71907f5cb2d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.552941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.552989 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553054 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553100 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553211 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553285 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553309 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553497 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553545 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs" (OuterVolumeSpecName: "logs") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553971 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.555796 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data" (OuterVolumeSpecName: "config-data") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.556542 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs" (OuterVolumeSpecName: "logs") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.556696 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts" (OuterVolumeSpecName: "scripts") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.557267 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts" (OuterVolumeSpecName: "scripts") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.557464 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data" (OuterVolumeSpecName: "config-data") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.559318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n" (OuterVolumeSpecName: "kube-api-access-zfh8n") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "kube-api-access-zfh8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.567012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7" (OuterVolumeSpecName: "kube-api-access-6jtf7") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "kube-api-access-6jtf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.578679 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.581284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655453 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655487 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655502 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655515 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655531 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655542 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655553 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655565 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655576 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.976583 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d66dd7b7-rjtfk" event={"ID":"3b110ddf-5eea-4e32-b9f3-f07886d636a2","Type":"ContainerDied","Data":"edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.976655 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.981266 5008 generic.go:334] "Generic (PLEG): container finished" podID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerID="82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91" exitCode=0 Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.981407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerDied","Data":"82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.988160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f4589f77-j49wf" event={"ID":"8e9e19dd-550a-467d-bd79-03ee07c2f470","Type":"ContainerDied","Data":"1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.988221 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.075194 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.082487 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.099433 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.107671 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.345844 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b110ddf-5eea-4e32-b9f3-f07886d636a2" path="/var/lib/kubelet/pods/3b110ddf-5eea-4e32-b9f3-f07886d636a2/volumes" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.347442 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9e19dd-550a-467d-bd79-03ee07c2f470" path="/var/lib/kubelet/pods/8e9e19dd-550a-467d-bd79-03ee07c2f470/volumes" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.390311 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.390518 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jftb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rcl2z_openstack(4ec0e696-652d-463e-b97e-dad0065a543b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.391694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.468154 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.468649 5008 scope.go:117] "RemoveContainer" containerID="8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.497967 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.503848 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.518979 5008 scope.go:117] "RemoveContainer" containerID="083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.546732 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.546893 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b5fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fwhd5_openstack(9069f34b-ed91-4ced-8b05-91b83dd02938): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.548126 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fwhd5" 
podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591735 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591880 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591946 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592031 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592143 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592169 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592202 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592225 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " 
Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592247 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592273 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs" (OuterVolumeSpecName: "logs") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592686 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.593310 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data" (OuterVolumeSpecName: "config-data") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.597833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.597910 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts" (OuterVolumeSpecName: "scripts") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598173 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts" (OuterVolumeSpecName: "scripts") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598369 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598417 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.601881 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52" (OuterVolumeSpecName: "kube-api-access-xnq52") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "kube-api-access-xnq52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.604994 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd" (OuterVolumeSpecName: "kube-api-access-btnzd") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "kube-api-access-btnzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.627026 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data" (OuterVolumeSpecName: "config-data") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.627796 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693926 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693957 5008 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693969 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693982 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693994 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694006 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694018 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694027 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694037 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694047 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.926286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.009123 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65975bb757-q7xqt" event={"ID":"5f86a518-6363-4796-a4f4-7208aacccc99","Type":"ContainerDied","Data":"115aa46cd8290b427be260b9a17520dfb8392c574f35bae7cb7c624f65477597"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.009219 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.010798 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"dac0f8e5f596bebb7822b413588359e7076b890b5ffed6cda246c2680781b018"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.014975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018711 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerDied","Data":"405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018750 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018817 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.022933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerStarted","Data":"d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80"} Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.025254 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fwhd5" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.083485 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tqc26" podStartSLOduration=5.144152025 podStartE2EDuration="1m15.08346618s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.85183948 +0000 UTC m=+1234.524693717" lastFinishedPulling="2026-01-29 15:49:20.791153605 +0000 UTC m=+1304.464007872" observedRunningTime="2026-01-29 15:49:24.071860279 +0000 UTC m=+1307.744714536" watchObservedRunningTime="2026-01-29 15:49:24.08346618 +0000 UTC m=+1307.756320417" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.132015 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.149062 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.173036 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.188230 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:49:24 crc kubenswrapper[5008]: W0129 15:49:24.196818 5008 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc599e48_62d0_4908_b4ed_cd3f13094665.slice/crio-3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1 WatchSource:0}: Error finding container 3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1: Status 404 returned error can't find the container with id 3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1 Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198213 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198639 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198654 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198686 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198714 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198736 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="init" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198744 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="init" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199003 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199022 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199538 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.204846 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205120 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205309 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205546 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205619 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.210120 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313548 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313692 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313773 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.414936 5008 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415039 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415172 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.422670 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.423806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.424342 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434293 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: 
\"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434884 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.527046 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.792753 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.034633 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerStarted","Data":"9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.034944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerStarted","Data":"49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.040516 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.043534 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.043576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"5f5aecf8bd63fb893c6a35270d50e8046b10807028399e2cdfea7069233a8cd3"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046218 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"24754d131e8a0251ba19391e948a3c3a2c435f2c51496c5aec749d99571d090c"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046233 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" 
event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.060299 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dkqkc" podStartSLOduration=1.060281477 podStartE2EDuration="1.060281477s" podCreationTimestamp="2026-01-29 15:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:25.053616525 +0000 UTC m=+1308.726470792" watchObservedRunningTime="2026-01-29 15:49:25.060281477 +0000 UTC m=+1308.733135714" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.069037 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f49b8c48b-x77zl" podStartSLOduration=66.129417814 podStartE2EDuration="1m7.069017698s" podCreationTimestamp="2026-01-29 15:48:18 +0000 UTC" firstStartedPulling="2026-01-29 15:49:23.406660802 +0000 UTC m=+1307.079515069" lastFinishedPulling="2026-01-29 15:49:24.346260716 +0000 UTC m=+1308.019114953" observedRunningTime="2026-01-29 15:49:25.068394904 +0000 UTC m=+1308.741249151" watchObservedRunningTime="2026-01-29 15:49:25.069017698 +0000 UTC m=+1308.741871945" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.092686 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bf5f5fc4b-t9vk7" podStartSLOduration=66.708414261 podStartE2EDuration="1m7.092664192s" podCreationTimestamp="2026-01-29 15:48:18 +0000 UTC" firstStartedPulling="2026-01-29 15:49:24.198890701 +0000 UTC m=+1307.871744938" lastFinishedPulling="2026-01-29 15:49:24.583140622 +0000 UTC m=+1308.255994869" observedRunningTime="2026-01-29 15:49:25.087739622 +0000 UTC m=+1308.760593879" watchObservedRunningTime="2026-01-29 15:49:25.092664192 +0000 UTC m=+1308.765518429" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.340017 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f86a518-6363-4796-a4f4-7208aacccc99" path="/var/lib/kubelet/pods/5f86a518-6363-4796-a4f4-7208aacccc99/volumes" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.340729 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" path="/var/lib/kubelet/pods/f8408515-bbd2-46aa-b98f-a331b6659aa8/volumes" Jan 29 15:49:27 crc kubenswrapper[5008]: I0129 15:49:27.064244 5008 generic.go:334] "Generic (PLEG): container finished" podID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerID="d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80" exitCode=0 Jan 29 15:49:27 crc kubenswrapper[5008]: I0129 15:49:27.064895 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerDied","Data":"d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80"} Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.136119 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.138205 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.258891 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.259227 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:32 crc kubenswrapper[5008]: I0129 15:49:32.119841 5008 generic.go:334] "Generic (PLEG): container finished" podID="39abc131-ba3e-4cd8-916a-520789627dd5" containerID="9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475" exitCode=0 Jan 29 15:49:32 crc kubenswrapper[5008]: I0129 15:49:32.120181 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerDied","Data":"9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475"} Jan 29 15:49:34 crc kubenswrapper[5008]: I0129 15:49:34.879684 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:49:34 crc kubenswrapper[5008]: I0129 15:49:34.919314 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.010985 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011245 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011300 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011424 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011510 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011545 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: 
\"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011574 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011652 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011684 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011712 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.012771 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs" (OuterVolumeSpecName: "logs") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.016279 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts" (OuterVolumeSpecName: "scripts") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.016748 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.017582 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx" (OuterVolumeSpecName: "kube-api-access-62fxx") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "kube-api-access-62fxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.020913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk" (OuterVolumeSpecName: "kube-api-access-x26nk") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). 
InnerVolumeSpecName "kube-api-access-x26nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.021430 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.023401 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts" (OuterVolumeSpecName: "scripts") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.044490 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.051020 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.058636 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data" (OuterVolumeSpecName: "config-data") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.059133 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data" (OuterVolumeSpecName: "config-data") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113415 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113453 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113463 5008 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113472 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113480 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113488 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113497 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113507 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113514 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113522 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113529 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155356 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerDied","Data":"7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa"} Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155392 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155637 5008 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157236 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerDied","Data":"49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd"} Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157268 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157317 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.158624 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.000480 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.001333 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001356 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.001410 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001422 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001703 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001754 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.002965 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.008503 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.008940 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009456 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009865 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rlqfr" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.020231 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.080058 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.081476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.085375 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086020 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086125 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086386 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086423 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086508 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.098233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.129899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130139 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130178 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130208 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130252 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232207 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232233 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232403 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232427 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232466 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232491 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232523 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232545 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232574 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232596 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232617 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.233492 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240216 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240389 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.241592 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.243313 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.259466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.321143 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.331997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334093 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334148 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334177 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334228 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334252 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334450 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc 
kubenswrapper[5008]: I0129 15:49:36.338958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.339003 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.339486 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.340021 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.341594 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.342024 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.344068 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.359584 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.399548 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-779d6696cc-ltp9g"
Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.829873 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6445bd445b-mhznq"]
Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.902669 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"]
Jan 29 15:49:36 crc kubenswrapper[5008]: W0129 15:49:36.919083 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4732d1d7_c3d2_4f17_bf74_d92f350a3e2b.slice/crio-46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb WatchSource:0}: Error finding container 46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb: Status 404 returned error can't find the container with id 46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.176957 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-779d6696cc-ltp9g" event={"ID":"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b","Type":"ContainerStarted","Data":"90a7a8a001252d6d79a74dafc7f878323b0551fc1bacab57b7f43ca842970005"}
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.177010 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-779d6696cc-ltp9g" event={"ID":"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b","Type":"ContainerStarted","Data":"46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb"}
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.177466 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-779d6696cc-ltp9g"
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.180389 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"}
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.180421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"359a72657c9bfba53abd214342c7a1e93d76aafd5e6beccbea5acec3bf995e32"}
Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.203282 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-779d6696cc-ltp9g" podStartSLOduration=1.2032598939999999 podStartE2EDuration="1.203259894s" podCreationTimestamp="2026-01-29 15:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:37.197117886 +0000 UTC m=+1320.869972133" watchObservedRunningTime="2026-01-29 15:49:37.203259894 +0000 UTC m=+1320.876114131"
Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.191193 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"}
Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.191924 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6445bd445b-mhznq"
Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.225581 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6445bd445b-mhznq" podStartSLOduration=3.225557095 podStartE2EDuration="3.225557095s" podCreationTimestamp="2026-01-29 15:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:38.219908448 +0000 UTC m=+1321.892762725" watchObservedRunningTime="2026-01-29 15:49:38.225557095 +0000 UTC m=+1321.898411362"
Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.136661 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.201292 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6445bd445b-mhznq"
Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.260894 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf5f5fc4b-t9vk7" podUID="fc599e48-62d0-4908-b4ed-cd3f13094665" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.211276 5008 generic.go:334] "Generic (PLEG): container finished" podID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerID="bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883" exitCode=0
Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.211476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerDied","Data":"bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883"}
Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.214250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerStarted","Data":"4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d"}
Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.252933 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fwhd5" podStartSLOduration=2.886856579 podStartE2EDuration="1m31.252913705s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.749747664 +0000 UTC m=+1234.422601901" lastFinishedPulling="2026-01-29 15:49:39.11580477 +0000 UTC m=+1322.788659027" observedRunningTime="2026-01-29 15:49:40.248167159 +0000 UTC m=+1323.921021396" watchObservedRunningTime="2026-01-29 15:49:40.252913705 +0000 UTC m=+1323.925767962"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.183906 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n7wgw"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233439 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerDied","Data":"b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373"}
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233681 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233848 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n7wgw"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.340877 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") "
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") "
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341555 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") "
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") "
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.347614 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk" (OuterVolumeSpecName: "kube-api-access-9m6lk") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "kube-api-access-9m6lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.352165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.379581 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.403272 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data" (OuterVolumeSpecName: "config-data") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.443597 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444355 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444365 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444374 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731006 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"]
Jan 29 15:49:42 crc kubenswrapper[5008]: E0129 15:49:42.731353 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731370 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731557 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.732427 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.743504 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"]
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851185 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851286 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851317 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851364 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851410 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952428 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952482 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952533 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952603 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953848 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953955 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.954003 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.954458 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.975829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.048840 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.538857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"]
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.635435 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.637142 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.643416 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.643890 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.644286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2qq6q"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.655044 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766383 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766438 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766539 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766646 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766868 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868799 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869076 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869099 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869656 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869747 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.873134 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.873669 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.877909 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880067 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880203 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880807 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.887249 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.892712 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.907799 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971328 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971393 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971463 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971506 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971547 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971565 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.002544 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.072939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073009 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073099 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073128 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073185 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073769 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.074739 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.074759 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.080836 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.088730 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.106024 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.109807 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.117237 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.201532 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277055 5008 generic.go:334] "Generic (PLEG): container finished" podID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" exitCode=0
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277100 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5"}
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277126 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerStarted","Data":"ae0b8d6c25c2b8b74e6f25f289f7b9be41b0f6b931b8004d5a8d1e2aa3fcb1dc"}
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.558939 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:44 crc kubenswrapper[5008]: W0129 15:49:44.576738 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa21b57d_29c9_4b5d_8712_66e3d5762f26.slice/crio-cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b WatchSource:0}: Error finding container cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b: Status 404 returned error can't find the container with id cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b
Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.800206 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 15:49:44 crc kubenswrapper[5008]: W0129 15:49:44.811205 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5858a5f6_5bd8_43b0_84bd_fc0cca454905.slice/crio-68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435 WatchSource:0}: Error finding container 68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435: Status 404 returned error can't find the container with id 68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.297883 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.298747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435"}
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.316975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerStarted","Data":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"}
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.317197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m"
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.321382 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"}
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.321573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b"}
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.348573 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" podStartSLOduration=3.348551747 podStartE2EDuration="3.348551747s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:45.341014184 +0000 UTC m=+1329.013868441" watchObservedRunningTime="2026-01-29 15:49:45.348551747 +0000 UTC m=+1329.021406004"
Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.379882 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.331776 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"}
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.332333 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log" containerID="cri-o://c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" gracePeriod=30
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.332835 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd" containerID="cri-o://15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" gracePeriod=30
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.351921 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba"}
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.351973 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33"}
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.352192 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" containerID="cri-o://f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" gracePeriod=30
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.352212 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" containerID="cri-o://0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" gracePeriod=30
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.358730 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.358707231 podStartE2EDuration="4.358707231s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:46.35456488 +0000 UTC m=+1330.027419137" watchObservedRunningTime="2026-01-29 15:49:46.358707231 +0000 UTC m=+1330.031561468"
Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.390800 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.390761299 podStartE2EDuration="4.390761299s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:46.382008527 +0000 UTC m=+1330.054862784" watchObservedRunningTime="2026-01-29 15:49:46.390761299 +0000 UTC m=+1330.063615556"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.017762 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.041850 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042106 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042134 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042242 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.043397 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.044346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs" (OuterVolumeSpecName: "logs") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.057420 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts" (OuterVolumeSpecName: "scripts") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.067087 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp" (OuterVolumeSpecName: "kube-api-access-956zp") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "kube-api-access-956zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.081708 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.082948 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.098802 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data" (OuterVolumeSpecName: "config-data") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144304 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144354 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144364 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144373 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144382 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144392 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144400 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.163733 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.245456 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363816 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" exitCode=0
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363860 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" exitCode=143
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"}
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363961 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"}
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363980 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b"}
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.364004 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.364181 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377105 5008 generic.go:334] "Generic (PLEG): container finished" podID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerID="0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" exitCode=0
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377371 5008 generic.go:334] "Generic (PLEG): container finished" podID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerID="f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" exitCode=143
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377182 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba"}
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377417 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33"}
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.413599 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.422753 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.435588 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.456913 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.457324 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457340 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log"
Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.457360 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457367 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457563 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457631 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.462290 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.464032 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"
Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.466578 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.466620 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} err="failed to get container status \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.466678 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"
Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.467214 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467260 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} err="failed to get container status \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467288 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467667 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467885 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470245 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} err="failed to get container status \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470275 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470474 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.471112 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} err="failed to get container status \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist"
Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.473210 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa21b57d_29c9_4b5d_8712_66e3d5762f26.slice\": RecentStats: unable to find data in memory cache]"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551232 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551486 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551716 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551741 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.653281 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655266 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655295 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655317 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655345 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0"
Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID:
\"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655402 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655433 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.654113 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.656505 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.658047 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.660416 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.661372 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.667365 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.671597 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.678246 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.695370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.784929 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.830498 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858623 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858681 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858757 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858903 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858937 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.859014 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.859048 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.860178 5008 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.860672 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs" (OuterVolumeSpecName: "logs") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.865220 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.866003 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts" (OuterVolumeSpecName: "scripts") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.884248 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r" (OuterVolumeSpecName: "kube-api-access-6gf7r") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "kube-api-access-6gf7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.917004 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.925511 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data" (OuterVolumeSpecName: "config-data") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.970991 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971029 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971048 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971060 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971073 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971106 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971119 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.995814 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.072265 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.328475 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.401209 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"7e694d90fa6a6ef1130c12d5f4ef32d5a6b46fd7321b4f1fabcb430d1ab3333d"} Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.403772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435"} Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.403870 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.404068 5008 scope.go:117] "RemoveContainer" containerID="0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.450224 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.451150 5008 scope.go:117] "RemoveContainer" containerID="f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.469159 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478231 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: E0129 15:49:48.478592 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478613 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: E0129 15:49:48.478632 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478639 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478814 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478853 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.479875 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.483973 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.484157 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.506671 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581576 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581616 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581639 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581674 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581716 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581735 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581762 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581826 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.683935 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684046 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684100 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684120 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684146 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684272 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685122 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685284 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685371 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.690568 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.691300 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.696352 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.700251 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.704258 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.731639 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.834982 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.337248 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" path="/var/lib/kubelet/pods/5858a5f6-5bd8-43b0-84bd-fc0cca454905/volumes" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.341549 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" path="/var/lib/kubelet/pods/fa21b57d-29c9-4b5d-8712-66e3d5762f26/volumes" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.430889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac"} Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.437692 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.444820 5008 generic.go:334] "Generic (PLEG): container finished" podID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerID="4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d" exitCode=0 Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.444861 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerDied","Data":"4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.460391 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.460843 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"d5ff4add692e0bdecfe0d236bfcf204bfe9c6a37130e4e5f390ced855d6ac026"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.462796 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.466306 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerStarted","Data":"0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.497369 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.497347347 podStartE2EDuration="3.497347347s" podCreationTimestamp="2026-01-29 15:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:50.485219274 +0000 UTC m=+1334.158073531" watchObservedRunningTime="2026-01-29 15:49:50.497347347 +0000 UTC m=+1334.170201584" Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.513900 5008 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/barbican-db-sync-rcl2z" podStartSLOduration=2.559826155 podStartE2EDuration="1m41.513884789s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.857545749 +0000 UTC m=+1234.530399986" lastFinishedPulling="2026-01-29 15:49:49.811604383 +0000 UTC m=+1333.484458620" observedRunningTime="2026-01-29 15:49:50.50775867 +0000 UTC m=+1334.180612927" watchObservedRunningTime="2026-01-29 15:49:50.513884789 +0000 UTC m=+1334.186739026" Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.858532 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.024941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025052 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025174 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025224 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025336 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025358 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025764 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.030487 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.031605 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh" (OuterVolumeSpecName: "kube-api-access-6b5fh") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "kube-api-access-6b5fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.032434 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts" (OuterVolumeSpecName: "scripts") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.063638 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.079736 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data" (OuterVolumeSpecName: "config-data") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.120336 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127122 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127182 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127220 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127242 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127263 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.475580 5008 generic.go:334] "Generic (PLEG): container finished" podID="6c2a1a18-16ff-4419-b233-8649579edbea" containerID="ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb" exitCode=0 Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.475646 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerDied","Data":"ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.481192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.484502 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.485022 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerDied","Data":"87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.485050 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.527974 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.527952908 podStartE2EDuration="3.527952908s" podCreationTimestamp="2026-01-29 15:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:51.51772539 +0000 UTC m=+1335.190579667" watchObservedRunningTime="2026-01-29 15:49:51.527952908 +0000 UTC m=+1335.200807145" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.529075 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717352 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: E0129 15:49:51.717689 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717705 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717916 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.719103 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.722763 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.723052 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.723196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x6pwm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.731181 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.754826 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.792192 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.792809 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" containerID="cri-o://8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" gracePeriod=10 Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.800965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.823801 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.825111 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843519 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843588 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843694 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843758 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.858694 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945730 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945809 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945852 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945890 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945916 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.952878 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.952962 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.961732 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.967470 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.973573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.993600 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.994991 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.998183 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.998476 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.002446 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.047910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.047973 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048103 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048154 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048172 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.049050 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.049569 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.050423 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051196 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.069294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149697 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149760 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149811 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149864 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149897 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.150005 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.156285 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252027 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252355 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252394 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252427 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252573 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252604 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.255684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.256575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.263499 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.264889 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.267696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.271284 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.287919 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.508355 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.552491 5008 generic.go:334] "Generic (PLEG): container finished" podID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" exitCode=0 Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553387 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553600 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"} Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"ae0b8d6c25c2b8b74e6f25f289f7b9be41b0f6b931b8004d5a8d1e2aa3fcb1dc"} Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553714 5008 scope.go:117] "RemoveContainer" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.578296 5008 scope.go:117] "RemoveContainer" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.622037 5008 scope.go:117] "RemoveContainer" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: E0129 15:49:52.623960 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": container with ID starting with 8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a not found: ID does not exist" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624006 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"} err="failed to get container status \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": rpc error: code = NotFound desc = could not find container \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": container with ID starting with 8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a not found: ID does not exist" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624035 5008 scope.go:117] "RemoveContainer" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: E0129 15:49:52.624389 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": container with ID starting with 79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5 not found: ID does not exist" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624425 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5"} err="failed to get container status \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": rpc error: code = NotFound desc = could not find container \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": container with ID starting with 79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5 not found: ID does not exist" Jan 29 15:49:52 crc 
kubenswrapper[5008]: I0129 15:49:52.663459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663530 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663580 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663603 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663634 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663749 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.677019 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk" (OuterVolumeSpecName: "kube-api-access-n4tmk") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "kube-api-access-n4tmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.687311 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.720799 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config" (OuterVolumeSpecName: "config") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.734313 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.742210 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.743344 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.748370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769345 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769390 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769405 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769417 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769428 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769440 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.917700 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.977156 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073580 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073606 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.079131 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6" (OuterVolumeSpecName: "kube-api-access-hmvz6") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "kube-api-access-hmvz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.103927 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config" (OuterVolumeSpecName: "config") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.147856 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176263 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176297 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176309 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.205902 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.413563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.582556 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"57c9901e381187fc7eb0fcdcbe0d130f0d9a3aa88a3658cef67338340e39620e"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.585923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"6656a69f4cd648a8aa3695a0ddc7bc96445ac83b10c1e0933a0183bb3570fe1e"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.589515 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595346 5008 generic.go:334] "Generic (PLEG): container finished" podID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerID="25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041" exitCode=0 Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerDied","Data":"25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerStarted","Data":"41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611469 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerDied","Data":"07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611506 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611533 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.688571 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.719831 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.734031 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757446 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757861 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757875 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757911 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="init" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757917 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="init" Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757926 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757932 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.758120 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.758134 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.759050 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.773391 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.790464 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.791857 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797016 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797272 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797373 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qg4fq" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797831 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.800572 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894737 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894842 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894878 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895028 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895121 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895335 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895478 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.996980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997036 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997081 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997098 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997124 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997267 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997286 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997306 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997340 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.998488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.999159 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.000853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " 
pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.001390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.001594 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.004774 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.004923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.005404 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.015108 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.018250 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.023506 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.095407 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.110663 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.197257 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.208009 5008 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 29 15:49:54 crc kubenswrapper[5008]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 15:49:54 crc kubenswrapper[5008]: > podSandboxID="41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf" Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.208180 5008 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 15:49:54 crc kubenswrapper[5008]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh66fh5d9h598h646h55dhb6h5bdh64h5c7h7bh5f6h559h55dh6hddh65bh644h55bh64bh669h5hcbhdbh564h5bfh67ch5d4h5fh657h5b7h675q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdbwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d6bd97c5-9t6nm_openstack(13aa614a-9b27-4f4d-a135-a7ee67864df9): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 15:49:54 crc kubenswrapper[5008]: > logger="UnhandledError" Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.210103 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.262933 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log" containerID="cri-o://c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" gracePeriod=30 Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347715 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" containerID="cri-o://864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" gracePeriod=30 Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.626111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427"} Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.839415 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:54 crc kubenswrapper[5008]: W0129 15:49:54.867666 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod198c1bb9_c544_4f02_9b28_983302b67f85.slice/crio-fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a WatchSource:0}: Error finding container 
fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a: Status 404 returned error can't find the container with id fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.940342 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.945777 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.023925 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.023992 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024108 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024144 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024216 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024237 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.052555 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm" (OuterVolumeSpecName: "kube-api-access-xdbwm") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "kube-api-access-xdbwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.126736 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.169722 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.175218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config" (OuterVolumeSpecName: "config") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.181054 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.209340 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.216725 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230128 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230254 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230330 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230411 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230486 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.337459 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" path="/var/lib/kubelet/pods/eeec0b0d-d386-486c-9bd7-2dfe88016cd8/volumes" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640315 5008 generic.go:334] "Generic (PLEG): container finished" podID="198c1bb9-c544-4f02-9b28-983302b67f85" containerID="5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534" exitCode=0 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640826 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerStarted","Data":"fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.650127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.650170 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"04b65eba50b91345633c6fc5a3520c31c3922a473da83be590641f8a8f92912a"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.652799 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667430 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" containerID="cri-o://6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" gracePeriod=30 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667499 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" containerID="cri-o://c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" gracePeriod=30 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667512 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680471 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerDied","Data":"41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680525 5008 scope.go:117] "RemoveContainer" containerID="25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680697 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.702380 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.702363843 podStartE2EDuration="4.702363843s" podCreationTimestamp="2026-01-29 15:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:55.684638763 +0000 UTC m=+1339.357493000" watchObservedRunningTime="2026-01-29 15:49:55.702363843 +0000 UTC m=+1339.375218080" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.772954 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.784232 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.697242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerStarted","Data":"3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.697719 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.701764 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.701858 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.706192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709051 5008 generic.go:334] "Generic (PLEG): container finished" podID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerID="c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" exitCode=0 Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709077 5008 generic.go:334] "Generic (PLEG): container finished" podID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerID="6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" exitCode=143 Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709125 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709146 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.723191 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-774db89647-tm89m" podStartSLOduration=3.723167046 podStartE2EDuration="3.723167046s" podCreationTimestamp="2026-01-29 15:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:56.717321814 +0000 UTC m=+1340.390176051" watchObservedRunningTime="2026-01-29 15:49:56.723167046 +0000 UTC m=+1340.396021303" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.754346 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c948b66b-9krkd" podStartSLOduration=3.754324241 podStartE2EDuration="3.754324241s" podCreationTimestamp="2026-01-29 15:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:56.743603951 +0000 UTC m=+1340.416458188" watchObservedRunningTime="2026-01-29 15:49:56.754324241 +0000 UTC m=+1340.427178488" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.770241 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.527820799 podStartE2EDuration="5.770222777s" podCreationTimestamp="2026-01-29 15:49:51 +0000 UTC" firstStartedPulling="2026-01-29 15:49:52.703754881 +0000 UTC m=+1336.376609118" lastFinishedPulling="2026-01-29 15:49:53.946156859 +0000 UTC m=+1337.619011096" observedRunningTime="2026-01-29 15:49:56.766140469 +0000 UTC m=+1340.438994716" watchObservedRunningTime="2026-01-29 15:49:56.770222777 +0000 UTC m=+1340.443077014" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.052644 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.336793 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" 
path="/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volumes" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.720506 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerID="864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" exitCode=0 Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.720916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd"} Jan 29 15:49:57 crc kubenswrapper[5008]: E0129 15:49:57.733141 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c3bbcd6_6512_4439_b70d_f46dd6382cfe.slice/crio-conmon-864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.785709 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.785765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.827906 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.841349 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.391994 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:49:58 crc kubenswrapper[5008]: E0129 15:49:58.396839 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.396888 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.397102 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.398024 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.401374 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.401756 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.402099 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502616 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502749 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502831 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.503227 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604876 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604938 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604968 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604984 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.605001 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612074 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612894 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612914 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " 
pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.613388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.614553 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.628833 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.629562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.731185 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.731227 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.732856 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.837156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.837812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.877932 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.890170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.135259 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.740186 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.740834 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.764139 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.764243 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.810092 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.654152 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.669868 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.946221 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.069857 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.069955 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070040 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070214 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070282 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070322 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070347 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.077902 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.078866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts" (OuterVolumeSpecName: "scripts") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.079411 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.079492 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756" (OuterVolumeSpecName: "kube-api-access-sg756") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "kube-api-access-sg756". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.084920 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs" (OuterVolumeSpecName: "logs") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.113590 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.147892 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data" (OuterVolumeSpecName: "config-data") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.172980 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173011 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173020 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173032 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173043 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173051 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173058 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.649919 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.692985 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769054 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"6656a69f4cd648a8aa3695a0ddc7bc96445ac83b10c1e0933a0183bb3570fe1e"} Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769125 5008 scope.go:117] "RemoveContainer" containerID="c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769205 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769348 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" containerID="cri-o://b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" gracePeriod=30 Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769722 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" containerID="cri-o://69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" gracePeriod=30 Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.806508 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.819648 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837032 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: E0129 15:50:02.837379 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837397 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: E0129 15:50:02.837410 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837417 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.838371 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.838394 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.841406 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.844936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.844976 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.845071 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.855042 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888638 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.891878 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.891958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892004 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892219 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994207 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994271 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994315 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994356 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994377 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994410 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994430 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994452 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994474 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994897 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.003633 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.003742 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.004555 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.006537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.007590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.014060 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.027066 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.164925 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.352505 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" path="/var/lib/kubelet/pods/0c2eec64-4eaa-4412-9ff7-dad5918c12c8/volumes" Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.098031 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.188087 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.188359 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" containerID="cri-o://7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" gracePeriod=10 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.786192 5008 generic.go:334] "Generic (PLEG): container finished" podID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerID="7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" exitCode=0 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.786352 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae"} Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.789281 5008 generic.go:334] "Generic (PLEG): container finished" podID="d01ff2cd-2707-4765-a399-a68312196c22" containerID="69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" exitCode=0 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.789317 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8"} Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.169427 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.313343 5008 scope.go:117] "RemoveContainer" containerID="6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.565014 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.565503 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8457b44a-814e-403f-a2c9-71907f5cb2d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.567364 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.802931 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" 
containerID="cri-o://c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" gracePeriod=30 Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.803388 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" containerID="cri-o://73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" gracePeriod=30 Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.878148 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.907758 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.921995 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944209 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944227 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944254 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944290 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944333 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.950657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9" (OuterVolumeSpecName: "kube-api-access-2hqb9") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "kube-api-access-2hqb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.990158 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.000933 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.003284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.004278 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config" (OuterVolumeSpecName: "config") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.006442 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046794 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046835 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046851 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046865 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046877 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046888 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.812539 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"22f0c736cbcd5ecc4ec0e4188555dfe5fe097fd13441242813876c102b643b46"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813114 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"be1205ad8348b2d00885a1eb712b0ab1840bafcf0d42108c655323a2168a5d8a"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813128 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"09bec9cc7b5bfce2561ed03382e1c39ba67dc010d7a69e00025756ae6c9863ae"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813141 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.817654 5008 generic.go:334] "Generic (PLEG): container finished" podID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" exitCode=2 Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.817699 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.818916 5008 generic.go:334] "Generic (PLEG): container finished" podID="4ec0e696-652d-463e-b97e-dad0065a543b" containerID="0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5" 
exitCode=0 Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.818963 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerDied","Data":"0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"0855c1b3124d74f066ce8585049d7c108a1ae142bfe48dd2fe48b76c9a87b4b0"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820274 5008 scope.go:117] "RemoveContainer" containerID="7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820383 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.832953 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-98cff5df-8qpcl" podStartSLOduration=8.832932782 podStartE2EDuration="8.832932782s" podCreationTimestamp="2026-01-29 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:06.831080857 +0000 UTC m=+1350.503935104" watchObservedRunningTime="2026-01-29 15:50:06.832932782 +0000 UTC m=+1350.505787019" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.846364 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"cd25fc19c8d48481455c2dc0d01e51bd350a2779964eeedcdc3663db00a3354d"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.846419 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"24ef3c55c65c899e90c3a3025fc0ec92178c9af98f4626b89ff82020024e8b95"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.907048 5008 scope.go:117] "RemoveContainer" containerID="3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.909162 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.915766 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.335854 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" path="/var/lib/kubelet/pods/771d4fdc-7731-4bfc-a65a-7c3b8624eb32/volumes" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.430134 5008 util.go:48] "No ready sandbox for pod can be found. 
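The records above trace the kubelet volume reconciler's teardown sequence for the deleted dnsmasq pod: "operationExecutor.UnmountVolume started" per volume, then "UnmountVolume.TearDown succeeded", then "Volume detached" once the mount is gone. A minimal triage sketch for logs formatted one record per line, as here; this is hypothetical Go written for this log, not kubelet code, and the regex is an assumption about the UniqueName shape (kubernetes.io/<plugin>/<pod-uid>-<volume>):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Capture the pod UID embedded in the volume's UniqueName,
	// e.g. kubernetes.io/configmap/<uid>-dns-svc.
	re := regexp.MustCompile(`Volume detached for volume .*kubernetes\.io/[a-z-]+/([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})-`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some records are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for uid, n := range counts {
		fmt.Printf("%s: %d volumes detached\n", uid, n)
	}
}

Fed the section above on stdin, this should report 6 detached volumes for UID 771d4fdc-7731-4bfc-a65a-7c3b8624eb32 (dns-swift-storage-0, kube-api-access-2hqb9, dns-svc, ovsdbserver-nb, ovsdbserver-sb, config).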
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490016 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490136 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490220 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490263 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490324 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490455 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490773 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.492162 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.492336 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.497410 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts" (OuterVolumeSpecName: "scripts") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.499543 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg" (OuterVolumeSpecName: "kube-api-access-ngjqg") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "kube-api-access-ngjqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.519872 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data" (OuterVolumeSpecName: "config-data") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.522994 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.528178 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593493 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593534 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593543 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593553 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593565 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875734 5008 generic.go:334] "Generic (PLEG): container finished" podID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" exitCode=0 Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875830 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.877084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"c97bf01c6b949d39e9bc8fa902a0c1cf304eedee9dbe4194b2055c35de3ec4ce"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875887 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.877149 5008 scope.go:117] "RemoveContainer" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.886837 5008 generic.go:334] "Generic (PLEG): container finished" podID="d01ff2cd-2707-4765-a399-a68312196c22" containerID="b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" exitCode=0 Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.886879 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.982385 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.005361 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.015571 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016103 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016121 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016134 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016141 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016163 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="init" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016169 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="init" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016183 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016190 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016341 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016360 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016374 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.018121 5008 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.021455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.022523 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.048649 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.051630 5008 scope.go:117] "RemoveContainer" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083217 5008 scope.go:117] "RemoveContainer" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"
Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.083640 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": container with ID starting with 73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24 not found: ID does not exist" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083670 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} err="failed to get container status \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": rpc error: code = NotFound desc = could not find container \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": container with ID starting with 73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24 not found: ID does not exist"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083690 5008 scope.go:117] "RemoveContainer" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"
Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.084209 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": container with ID starting with c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234 not found: ID does not exist" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.084230 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} err="failed to get container status \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": rpc error: code = NotFound desc = could not find container \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": container with ID starting with c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234 not found: ID does not exist"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.101754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102068 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103211 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103295 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205539 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205591 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205665 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205841 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.207350 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.207729 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.211433 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.211583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.212412 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.213632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.223667 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.328449 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6445bd445b-mhznq"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.329027 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6445bd445b-mhznq"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.345719 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-779d6696cc-ltp9g"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.348179 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.359621 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z"
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.424997 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") "
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.425665 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") "
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.425818 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") "
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.434738 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb" (OuterVolumeSpecName: "kube-api-access-5jftb") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "kube-api-access-5jftb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.445160 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.494342 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528861 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528894 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528903 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.617668 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.618265 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.618286 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.618719 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.620918 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.653495 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.731952 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732002 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732212 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " 
pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732313 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.781406 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833543 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833632 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833695 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833929 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834064 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: 
\"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834519 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834581 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834672 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834744 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.837454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.840315 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.845349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.847346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.850962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts" (OuterVolumeSpecName: "scripts") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.855584 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.855682 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8" (OuterVolumeSpecName: "kube-api-access-4hzd8") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "kube-api-access-4hzd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.856040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.860473 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.861094 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.877158 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900509 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerDied","Data":"748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900528 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900547 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903224 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903206 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"57c9901e381187fc7eb0fcdcbe0d130f0d9a3aa88a3658cef67338340e39620e"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903359 5008 scope.go:117] "RemoveContainer" containerID="69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.908696 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"70a42b9a83558cc59b8000dd44397e820d03c275dab9b9708e536893765263c3"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.909505 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.931896 5008 scope.go:117] "RemoveContainer" containerID="b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.938996 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939021 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939030 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939038 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.954618 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.962371 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.008021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data" (OuterVolumeSpecName: "config-data") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.044343 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.044369 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.063992 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.063970024 podStartE2EDuration="7.063970024s" podCreationTimestamp="2026-01-29 15:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:08.932137156 +0000 UTC m=+1352.604991403" watchObservedRunningTime="2026-01-29 15:50:09.063970024 +0000 UTC m=+1352.736824261" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.121322 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.125480 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.148219 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155498 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:09 crc kubenswrapper[5008]: E0129 15:50:09.155906 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155923 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: E0129 15:50:09.155952 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.156102 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.156141 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.157098 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.159814 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wg4h5" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.160108 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.160255 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.237936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253824 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253857 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253905 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.255420 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.270554 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.274320 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.276840 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.353654 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" path="/var/lib/kubelet/pods/8457b44a-814e-403f-a2c9-71907f5cb2d2/volumes" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.354234 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355826 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355891 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
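
The "Cleaned up orphaned pod volumes dir" entry just above removes the per-pod volumes directory under the kubelet root. The layout can be reconstructed from the logged path itself; a trivial sketch (treating the kubelet root as a constant is my assumption, in practice it is configurable):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	const kubeletRoot = "/var/lib/kubelet"
	podUID := "8457b44a-814e-403f-a2c9-71907f5cb2d2" // UID from the cleanup entry above
	// Prints /var/lib/kubelet/pods/8457b44a-814e-403f-a2c9-71907f5cb2d2/volumes,
	// matching the logged path.
	fmt.Println(filepath.Join(kubeletRoot, "pods", podUID, "volumes"))
}
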
\"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355989 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356015 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356717 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.361958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.363298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.383885 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.383993 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.388140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.397846 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.399376 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.401367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.407980 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.417400 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.443393 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.452140 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458024 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458076 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458099 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458124 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc 
kubenswrapper[5008]: I0129 15:50:09.458146 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458170 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458201 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458228 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458289 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458334 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458356 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458380 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " 
pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458410 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458431 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458490 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.460093 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.463761 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.463827 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.465311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.466517 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.467992 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.471272 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.487505 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.489342 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.504741 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559904 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559953 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560010 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560035 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
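
cinder-scheduler-0 is DELETEd and REMOVEd above and immediately ADDed back, and the volume entries show its UID changing from d01ff2cd-... to 2c4e7961-..., i.e. the pod object was replaced while keeping its name. A small sketch of spotting such recreations by tracking name-to-UID transitions; the SyncLoop lines themselves do not print UIDs, so the UIDs in the event list are taken from the neighbouring volume entries:

package main

import "fmt"

type event struct{ verb, pod, uid string }

func main() {
	events := []event{
		{"ADD", "openstack/cinder-scheduler-0", "d01ff2cd-2707-4765-a399-a68312196c22"},
		{"DELETE", "openstack/cinder-scheduler-0", "d01ff2cd-2707-4765-a399-a68312196c22"},
		{"REMOVE", "openstack/cinder-scheduler-0", "d01ff2cd-2707-4765-a399-a68312196c22"},
		{"ADD", "openstack/cinder-scheduler-0", "2c4e7961-5802-47c7-becf-75dd01d6e7d1"},
	}
	last := map[string]string{} // pod name -> last seen UID
	for _, e := range events {
		if e.verb != "ADD" {
			continue
		}
		if prev, ok := last[e.pod]; ok && prev != e.uid {
			fmt.Printf("%s recreated: %s -> %s\n", e.pod, prev, e.uid)
		}
		last[e.pod] = e.uid
	}
}
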
\"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560193 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560260 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560326 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560356 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560408 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560441 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.570515 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.570881 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.571577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.572058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.572279 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.575049 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.578064 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " 
pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.583898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.589511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.594155 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.596563 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.604252 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.626463 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676805 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676995 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.677108 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.677244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.678222 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.685033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.686854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.686924 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.690307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.694269 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.856071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.870236 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.897436 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.932421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"49577ead3c56cfa4fc8c4afa22ed35523d5fc6a9bd2fe14bedee0cd114ebd9c9"} Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.945029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"ac8bcb14c02650f4628017163e965fe6e1e75f1116276a7166d11c7831388a13"} Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.031522 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.164937 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.169138 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77f54f0_02b9_4082_8a76_dc78a9b7d08c.slice/crio-98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 WatchSource:0}: Error finding container 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1: Status 404 returned error can't find the container with id 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.416405 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.418965 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930b6c6f_40a8_476f_ad73_069c7f2ffeb8.slice/crio-32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d WatchSource:0}: Error finding container 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d: Status 404 returned error can't find the container with id 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.513333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.592932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.978930 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007881 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007928 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 
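
The two W-level "Failed to process watch event ... Status 404" entries above come from cAdvisor noticing a freshly created crio cgroup before the container is queryable through the runtime; such warnings are usually benign when a later PLEG ContainerStarted event carries the same ID, as happens here for 98d5e662.... A sketch that cross-references the two (the regex is an assumption; container IDs in this log are 64 hex characters behind a "crio-" prefix):

package main

import (
	"fmt"
	"regexp"
)

var crioID = regexp.MustCompile(`crio-([0-9a-f]{64})`)

func main() {
	warn := `Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77f54f0_02b9_4082_8a76_dc78a9b7d08c.slice/crio-98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 WatchSource:0`
	started := map[string]bool{
		// ID from the later ContainerStarted event for barbican-worker-5c46c758ff-5p4jl.
		"98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1": true,
	}
	if m := crioID.FindStringSubmatch(warn); m != nil && started[m[1]] {
		fmt.Println("transient 404: container", m[1][:12], "started shortly after")
	}
}
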
15:50:11.028924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"83c43918b15aee419aec8c4f6c3c4f54f869f9668c31e8758b308bc721697e71"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.028967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"c021a354391bbbe3f6a8013dd6a9be3fd3462137824bb5153db6eeeb65ccb07e"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.031085 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.031123 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.066374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerStarted","Data":"3e9db3acbe84cb18dcd650ffdeedfffc3c78951f208824646557062d45cea8c7"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.068916 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55d9fbf66-r5kj8" podStartSLOduration=3.06890256 podStartE2EDuration="3.06890256s" podCreationTimestamp="2026-01-29 15:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:11.063211582 +0000 UTC m=+1354.736065819" watchObservedRunningTime="2026-01-29 15:50:11.06890256 +0000 UTC m=+1354.741756797" Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069015 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069693 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"fa6af8a974cb497ec74206ad3e39eb89858800b219a9a36c75932238ec4997e5"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.076830 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"a630ba7450cf7ff5b68a80184b109d24ebcf6fbd8c1fb0273e1b87fb9c31dea3"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.340107 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01ff2cd-2707-4765-a399-a68312196c22" path="/var/lib/kubelet/pods/d01ff2cd-2707-4765-a399-a68312196c22/volumes" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.043201 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"] Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.044909 5008 
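
The horizon readiness failure earlier in this burst ("connect: connection refused" against https://10.217.0.145:8443/...) and the SyncLoop (probe) entries above are two halves of the same mechanism: kubelet issues the HTTP GET, then feeds the result back into the sync loop. A standalone sketch of such a probe; run outside the cluster it will simply report the same connection error. Note that kubelet's HTTPS probes also skip certificate verification and, as far as I know, treat any 2xx or 3xx status as success:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// URL taken from the failed horizon readiness probe in this log.
	resp, err := c.Get("https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}
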
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.049985 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.053480 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.066997 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"] Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.089967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.101574 5008 generic.go:334] "Generic (PLEG): container finished" podID="35979baf-dba0-453c-bafd-16985d082448" containerID="054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41" exitCode=0 Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.101643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.107074 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.108019 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.108048 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.134732 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"f20b07be3b02c44f08ebde7ad6b772dc570c81268411b26108454494d2b2451c"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169016 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169132 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod 
\"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169499 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169535 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.188138 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-788c485464-442t2" podStartSLOduration=3.18811723 podStartE2EDuration="3.18811723s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:12.153386197 +0000 UTC m=+1355.826240444" watchObservedRunningTime="2026-01-29 15:50:12.18811723 +0000 UTC m=+1355.860971477" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272054 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272127 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 
crc kubenswrapper[5008]: I0129 15:50:12.272151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.273301 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.278313 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.279606 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.282062 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.292040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.292745 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.301088 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.435901 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.149122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"f2adc752d118aaa84aabfad36038ec09473521ded01c78fdfc0626baa53e4c0a"} Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.175622 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.175599915 podStartE2EDuration="4.175599915s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:13.166386002 +0000 UTC m=+1356.839240249" watchObservedRunningTime="2026-01-29 15:50:13.175599915 +0000 UTC m=+1356.848454152" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.418312 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.419794 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.419954 5008 util.go:30] "No sandbox for pod can be found. 
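
The startup-latency entries in this section are self-checking: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp (7.063970024s for cinder-api-0, 3.06890256s for placement, 4.175599915s for cinder-scheduler-0), and the zero-valued pulling timestamps suggest no image pull sat on the critical path. Reproducing the cinder-scheduler-0 arithmetic with Go's default time layout:

package main

import (
	"fmt"
	"time"
)

// Go's default time.Time formatting, which these log fields use.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	// Timestamps from the cinder-scheduler-0 startup-latency entry above.
	created, err := time.Parse(layout, "2026-01-29 15:50:09 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 15:50:13.175599915 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// watchObservedRunningTime - podCreationTimestamp = podStartSLOduration
	fmt.Printf("%.9fs\n", observed.Sub(created).Seconds()) // 4.175599915s
}
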
Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.426926 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cnc9x"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.428100 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.428647 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517751 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517823 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.518189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.621247 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622443 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.624767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.633654 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.651538 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.657367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.694723 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.696661 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703553 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703755 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703915 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.711215 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.712366 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.728867 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.784547 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.804534 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.805573 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.810607 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827121 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827160 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827197 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827235 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827258 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827314 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827354 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.861501 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"]
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.930561 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931374 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931472 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931981 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932065 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932158 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932211 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932318 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932365 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932410 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932442 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.933316 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.937454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.938242 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.939225 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.947950 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.949296 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.962906 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.033952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034141 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034197 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.035763 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.038716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.046411 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.070474 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: E0129 15:50:14.076111 5008 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 29 15:50:14 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_26e3e9ce-4ea8-4746-af4e-21d6f2c9be74_0(f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289" Netns:"/var/run/netns/cfb342f6-07c6-44bc-9d3f-a3d9dbcd1a06" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289;K8S_POD_UID=26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74]: expected pod UID "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" but got "3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" from Kube API
Jan 29 15:50:14 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 29 15:50:14 crc kubenswrapper[5008]: >
Jan 29 15:50:14 crc kubenswrapper[5008]: E0129 15:50:14.076191 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 29 15:50:14 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_26e3e9ce-4ea8-4746-af4e-21d6f2c9be74_0(f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289" Netns:"/var/run/netns/cfb342f6-07c6-44bc-9d3f-a3d9dbcd1a06" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289;K8S_POD_UID=26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74]: expected pod UID "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" but got "3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" from Kube API
Jan 29 15:50:14 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 29 15:50:14 crc kubenswrapper[5008]: > pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.108409 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.136523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.176564 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"d4bb9c3bf450ab33644a2c34f16296cc32f39dc53c4ed3f5e8f37b10c024982d"}
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.182878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"4e511f378927e33787b2a83a1f7b41cfa438148ffe7e2bba89ff6429ae3dda94"}
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.182912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"705f9053f043424c55ed90da76ae1b122f1f646741a6cdbb600c0bf424142cc2"}
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.199055 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"188f7ccadd7fa4a7273e1f297c3797cdf32c0a0265d4db20de98f029d9d205dd"}
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239119 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerStarted","Data":"517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040"}
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239188 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239960 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-h99wm"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.242364 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c46c758ff-5p4jl" podStartSLOduration=2.242246584 podStartE2EDuration="5.242334772s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="2026-01-29 15:50:10.172038443 +0000 UTC m=+1353.844892680" lastFinishedPulling="2026-01-29 15:50:13.172126631 +0000 UTC m=+1356.844980868" observedRunningTime="2026-01-29 15:50:14.237217878 +0000 UTC m=+1357.910072115" watchObservedRunningTime="2026-01-29 15:50:14.242334772 +0000 UTC m=+1357.915189019"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.292033 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" podUID="3b26c725-8ee1-4144-baa0-a4a85bb7e1d2"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.292726 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" podStartSLOduration=5.292704084 podStartE2EDuration="5.292704084s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:14.279956235 +0000 UTC m=+1357.952810492" watchObservedRunningTime="2026-01-29 15:50:14.292704084 +0000 UTC m=+1357.965558321"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.563513 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.575376 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658436 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") "
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658598 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") "
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") "
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658675 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") "
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.668588 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.673057 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.677750 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.708747 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg" (OuterVolumeSpecName: "kube-api-access-qrxmg") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "kube-api-access-qrxmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761488 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761520 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761534 5008 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761543 5008 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.808028 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.898902 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 29 15:50:15 crc kubenswrapper[5008]: W0129 15:50:15.053146 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c08f63_12a2_4dfb_b96d_0a12e9725021.slice/crio-0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0 WatchSource:0}: Error finding container 0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0: Status 404 returned error can't find the container with id 0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.053264 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"]
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251350 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"21ab3e5e9c098630f4e65ce2d9d27c6c6fb172b9c0728335e8e146f72a60d6a6"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251619 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f9c9f8766-4lf97"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251815 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f9c9f8766-4lf97"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.252924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2","Type":"ContainerStarted","Data":"e11174b8f1bc4882d3aaac37c4f644a3449e0accefc630d8e4b56b876aefa9f7"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.255498 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"024a5adcb36407fd6632a358885d2bf858f9bbe76adf4c504c995d93e17ab4b9"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.259129 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.259656 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.261411 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.263952 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"8e1bf41bb7757d4555c8defd7cda4fb736b4a3836e9261f60d5c0dae9d4b367d"}
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.264008 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.282198 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f9c9f8766-4lf97" podStartSLOduration=3.282175497 podStartE2EDuration="3.282175497s" podCreationTimestamp="2026-01-29 15:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:15.269716325 +0000 UTC m=+1358.942570562" watchObservedRunningTime="2026-01-29 15:50:15.282175497 +0000 UTC m=+1358.955029734"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.313315 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.51854739 podStartE2EDuration="8.313291662s" podCreationTimestamp="2026-01-29 15:50:07 +0000 UTC" firstStartedPulling="2026-01-29 15:50:09.125259471 +0000 UTC m=+1352.798113698" lastFinishedPulling="2026-01-29 15:50:13.920003733 +0000 UTC m=+1357.592857970" observedRunningTime="2026-01-29 15:50:15.30334024 +0000 UTC m=+1358.976194497" watchObservedRunningTime="2026-01-29 15:50:15.313291662 +0000 UTC m=+1358.986145899"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.327654 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" podStartSLOduration=3.188357436 podStartE2EDuration="6.327639719s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="2026-01-29 15:50:10.054875412 +0000 UTC m=+1353.727729649" lastFinishedPulling="2026-01-29 15:50:13.194157695 +0000 UTC m=+1356.867011932" observedRunningTime="2026-01-29 15:50:15.322958956 +0000 UTC m=+1358.995813203" watchObservedRunningTime="2026-01-29 15:50:15.327639719 +0000 UTC m=+1359.000493956"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.328415 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" podUID="3b26c725-8ee1-4144-baa0-a4a85bb7e1d2"
Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.339762 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" path="/var/lib/kubelet/pods/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74/volumes"
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.017698 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283028 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"ec2fce16711c316062f9db7e235b90573123117bde5ab9b3c69b07d18ad9760c"}
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283373 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"782f8005c6f1c16206119dca644f958e03a7a1c84c42e66652134cff73017c15"}
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283924 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" containerID="cri-o://29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" gracePeriod=30
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284013 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core" containerID="cri-o://0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" gracePeriod=30
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284044 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" containerID="cri-o://90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" gracePeriod=30
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284081 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" containerID="cri-o://87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" gracePeriod=30
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284164 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz"
Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.319601 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" podStartSLOduration=3.319577682 podStartE2EDuration="3.319577682s" podCreationTimestamp="2026-01-29 15:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:16.29847121 +0000 UTC m=+1359.971325447" watchObservedRunningTime="2026-01-29 15:50:16.319577682 +0000 UTC m=+1359.992431929"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.159750 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209491 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209541 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209594 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209630 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210201 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") "
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210413 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210697 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210854 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210867 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.216489 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts" (OuterVolumeSpecName: "scripts") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.216505 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl" (OuterVolumeSpecName: "kube-api-access-7nrdl") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "kube-api-access-7nrdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.306966 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317212 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317246 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317263 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.351315 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379650 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" exitCode=0
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379684 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" exitCode=2
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379693 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" exitCode=0
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379700 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" exitCode=0
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.380708 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388309 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"}
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388360 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"}
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388375 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"}
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388386 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"}
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"ac8bcb14c02650f4628017163e965fe6e1e75f1116276a7166d11c7831388a13"}
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388417 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.430457 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data" (OuterVolumeSpecName: "config-data") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.430995 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.431021 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.524610 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.551996 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.591822 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612327 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.612815 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612862 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612892 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613279 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613312 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613325 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613560 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613581 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613593 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613766 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613796 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613808 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613990 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614015 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614207 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614225 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614393 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614428 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614607 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614630 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614833 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614851 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615036 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615060 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615232 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615255 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615416 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615434 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615607 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615642 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615827 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615860 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.616017 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.616035 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.620188 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.721310 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.733404 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.744620 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745211 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745237 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745254 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745262 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745273 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent"
Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745292 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745298 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745524 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core"
Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745545 5008 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745563 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745576 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.748123 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.750589 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.751005 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.755676 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839813 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839855 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839902 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839933 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941534 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941568 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941608 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941640 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941678 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.942517 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.942537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.949045 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.949106 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.950884 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.953864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.968574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:18 crc kubenswrapper[5008]: I0129 15:50:18.068456 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:18 crc kubenswrapper[5008]: I0129 15:50:18.660578 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:18 crc kubenswrapper[5008]: W0129 15:50:18.669875 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bd431d_b897_47c3_a9cd_0dc161e88e4b.slice/crio-7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57 WatchSource:0}: Error finding container 7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57: Status 404 returned error can't find the container with id 7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57 Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.135193 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.135370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.334205 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98db574-9529-4d76-be4d-66b44b61a962" path="/var/lib/kubelet/pods/b98db574-9529-4d76-be4d-66b44b61a962/volumes" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.416694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57"} Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.858009 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.959316 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.959640 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-774db89647-tm89m" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns" containerID="cri-o://3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" gracePeriod=10 Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.402196 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.438744 5008 generic.go:334] "Generic (PLEG): container finished" podID="198c1bb9-c544-4f02-9b28-983302b67f85" containerID="3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" exitCode=0 Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.438875 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597"} Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.447437 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.056603 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156129 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156364 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.188719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw" (OuterVolumeSpecName: "kube-api-access-xlzfw") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "kube-api-access-xlzfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.235674 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.253113 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259162 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259196 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259206 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.281612 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.305578 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.309303 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config" (OuterVolumeSpecName: "config") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360903 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360942 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360952 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.504754 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512557 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512619 5008 scope.go:117] "RemoveContainer" containerID="3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512774 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.685399 5008 scope.go:117] "RemoveContainer" containerID="5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.686128 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.703213 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.235563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.521250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.787896 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:23 crc kubenswrapper[5008]: I0129 15:50:23.335199 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" path="/var/lib/kubelet/pods/198c1bb9-c544-4f02-9b28-983302b67f85/volumes" Jan 29 15:50:23 crc kubenswrapper[5008]: I0129 15:50:23.953222 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.113154 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.120422 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.124543 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.554512 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerID="c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" exitCode=137 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.555812 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae"} Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.882463 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.946618 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.946828 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" containerID="cri-o://d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" gracePeriod=30 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.947196 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" containerID="cri-o://d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" gracePeriod=30 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.968202 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Jan 29 15:50:25 crc kubenswrapper[5008]: I0129 15:50:25.584720 5008 generic.go:334] "Generic (PLEG): container finished" podID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerID="d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" exitCode=143 Jan 29 15:50:25 crc kubenswrapper[5008]: I0129 15:50:25.584766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78"} Jan 29 15:50:27 crc kubenswrapper[5008]: I0129 15:50:27.476974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.388374 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:38762->10.217.0.167:9311: read: connection reset by peer" Jan 29 15:50:28 crc kubenswrapper[5008]: E0129 15:50:28.598475 5008 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930b6c6f_40a8_476f_ad73_069c7f2ffeb8.slice/crio-d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.645557 5008 generic.go:334] "Generic (PLEG): container finished" podID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerID="d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" exitCode=0 Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.645633 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36"} Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.812035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.881466 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.881939 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c948b66b-9krkd" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api" containerID="cri-o://bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32" gracePeriod=30 Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.882098 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c948b66b-9krkd" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd" containerID="cri-o://07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb" gracePeriod=30 Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.136226 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.658462 5008 generic.go:334] "Generic (PLEG): container finished" podID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerID="07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb" exitCode=0 Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.658508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb"} Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.870664 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": dial tcp 10.217.0.167:9311: connect: connection refused" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.871076 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.870693 5008 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": dial tcp 10.217.0.167:9311: connect: connection refused" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.782768 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.839990 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871453 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871665 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871762 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871833 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871897 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.879446 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs" (OuterVolumeSpecName: "logs") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.883810 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg" (OuterVolumeSpecName: "kube-api-access-vxxxg") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "kube-api-access-vxxxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.883875 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.900076 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data" (OuterVolumeSpecName: "config-data") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.904549 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.922060 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts" (OuterVolumeSpecName: "scripts") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.932808 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973424 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973463 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973545 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973590 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974105 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974132 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974146 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974157 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974167 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974178 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974190 5008 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.983005 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.989366 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs" (OuterVolumeSpecName: "logs") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.990770 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9" (OuterVolumeSpecName: "kube-api-access-gt8j9") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "kube-api-access-gt8j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.016928 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.069724 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data" (OuterVolumeSpecName: "config-data") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075868 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075913 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075927 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075935 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075945 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"dac0f8e5f596bebb7822b413588359e7076b890b5ffed6cda246c2680781b018"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689172 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689222 5008 scope.go:117] "RemoveContainer" containerID="864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.691192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2","Type":"ContainerStarted","Data":"a65066fbb5d55199948471854794db9995525f198fcafd03654ba2cce2be6f2e"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.693584 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.693664 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696342 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696511 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent" containerID="cri-o://2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696650 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core" containerID="cri-o://2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696684 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent" containerID="cri-o://e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696664 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd" containerID="cri-o://b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.711256 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.966000365 podStartE2EDuration="19.711232926s" podCreationTimestamp="2026-01-29 15:50:13 +0000 UTC" firstStartedPulling="2026-01-29 15:50:14.810032094 +0000 UTC m=+1358.482886331" lastFinishedPulling="2026-01-29 15:50:31.555264655 +0000 UTC m=+1375.228118892" observedRunningTime="2026-01-29 15:50:32.711041312 +0000 UTC m=+1376.383895549" watchObservedRunningTime="2026-01-29 15:50:32.711232926 +0000 UTC m=+1376.384087183" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.775308 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.963267787 podStartE2EDuration="15.77529053s" podCreationTimestamp="2026-01-29 15:50:17 +0000 UTC" firstStartedPulling="2026-01-29 15:50:18.674092305 +0000 UTC m=+1362.346946542" lastFinishedPulling="2026-01-29 15:50:31.486115048 +0000 UTC m=+1375.158969285" observedRunningTime="2026-01-29 15:50:32.755875419 +0000 UTC m=+1376.428729666" watchObservedRunningTime="2026-01-29 15:50:32.77529053 +0000 UTC m=+1376.448144767" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.814075 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842033 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842065 5008 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842076 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.892961 5008 scope.go:117] "RemoveContainer" containerID="c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.957083 5008 scope.go:117] "RemoveContainer" containerID="d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.985769 5008 scope.go:117] "RemoveContainer" containerID="d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.335265 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" path="/var/lib/kubelet/pods/8c3bbcd6-6512-4439-b70d-f46dd6382cfe/volumes" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.336061 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" path="/var/lib/kubelet/pods/930b6c6f-40a8-476f-ad73-069c7f2ffeb8/volumes" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707294 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" exitCode=0 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707318 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" exitCode=2 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707325 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" exitCode=0 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707355 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707375 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707385 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.249985 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.338963 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339014 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339171 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339249 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339364 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.341440 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.342965 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.347911 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts" (OuterVolumeSpecName: "scripts") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.366377 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh" (OuterVolumeSpecName: "kube-api-access-m56fh") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "kube-api-access-m56fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.378943 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442939 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442972 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442981 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442990 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.443000 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.521668 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.544179 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.602431 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data" (OuterVolumeSpecName: "config-data") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.646319 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727317 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" exitCode=0 Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727418 5008 scope.go:117] "RemoveContainer" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727502 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.734631 5008 generic.go:334] "Generic (PLEG): container finished" podID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerID="bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32" exitCode=0 Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.734680 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.755549 5008 scope.go:117] "RemoveContainer" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.775084 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.799260 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.804018 5008 scope.go:117] "RemoveContainer" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810140 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810608 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810629 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810642 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810658 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810678 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="init" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810685 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="init" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810694 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810700 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810716 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810722 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810732 5008 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810751 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810764 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810770 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810828 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810837 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810848 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810857 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810873 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810879 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811069 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811086 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811096 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811107 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811120 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811129 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811136 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811146 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811154 5008 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.821575 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825026 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825191 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.833771 5008 scope.go:117] "RemoveContainer" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.947513 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950775 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950852 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950980 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951156 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951242 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951654 5008 scope.go:117] "RemoveContainer" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952017 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": container with ID starting with b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415 not found: ID does not exist" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952062 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} err="failed to get container status \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": rpc error: code = NotFound desc = could not find container \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": container with ID starting with b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415 not found: ID does not exist" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952097 5008 scope.go:117] "RemoveContainer" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952365 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": container with ID starting with 2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b not found: ID does not exist" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952391 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} err="failed to get container status \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": rpc error: code = NotFound desc = could not find container \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": container with ID starting with 2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b not found: ID does not exist" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952408 5008 scope.go:117] "RemoveContainer" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952594 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": container with ID starting with e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc not found: ID does not exist" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952618 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} err="failed to get 
container status \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": rpc error: code = NotFound desc = could not find container \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": container with ID starting with e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc not found: ID does not exist" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952634 5008 scope.go:117] "RemoveContainer" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952852 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": container with ID starting with 2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351 not found: ID does not exist" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952881 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} err="failed to get container status \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": rpc error: code = NotFound desc = could not find container \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": container with ID starting with 2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351 not found: ID does not exist" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052131 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052228 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052294 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052414 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052463 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052691 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052884 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.053354 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.057669 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.058556 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.059939 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.060061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.061614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.063410 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.064673 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq" (OuterVolumeSpecName: "kube-api-access-lhflq") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "kube-api-access-lhflq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.077034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.108191 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.122851 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config" (OuterVolumeSpecName: "config") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.140330 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154533 5008 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154563 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154573 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154584 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154592 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.243878 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:36 crc kubenswrapper[5008]: W0129 15:50:36.693686 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc81636ad_f799_43f6_8304_b2121e7bb427.slice/crio-0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551 WatchSource:0}: Error finding container 0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551: Status 404 returned error can't find the container with id 0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551 Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.694055 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744279 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744474 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"04b65eba50b91345633c6fc5a3520c31c3922a473da83be590641f8a8f92912a"} Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744629 5008 scope.go:117] "RemoveContainer" containerID="07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.745501 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551"} Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.781444 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.788463 5008 scope.go:117] "RemoveContainer" containerID="bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32" Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.796674 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.337773 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" path="/var/lib/kubelet/pods/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2/volumes" Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.342320 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" path="/var/lib/kubelet/pods/b2bd431d-b897-47c3-a9cd-0dc161e88e4b/volumes" Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.757026 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"} Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.775244 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"} Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.889618 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 15:50:38 crc kubenswrapper[5008]: E0129 15:50:38.890482 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890546 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd" Jan 29 15:50:38 crc kubenswrapper[5008]: E0129 15:50:38.890676 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890727 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890944 5008 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.891008 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.891610 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.939918 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.999943 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.000937 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.003206 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.003242 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.021097 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104537 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104625 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 
15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.105434 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.130444 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.175462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.184933 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.193693 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.195470 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.197354 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.205848 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.205881 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.206866 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.219023 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.223091 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.226500 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.241705 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.309744 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310234 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310342 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310359 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.329058 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412033 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412138 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412165 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.415153 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.415939 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.439853 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.441172 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.448670 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.449953 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.457617 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.466589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.504688 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.515029 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.518836 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.518979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.527086 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.529084 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.531196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.538431 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621165 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621338 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621379 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621406 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.622361 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.644246 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.723914 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.724067 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.725179 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.744823 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.800818 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 15:50:39 crc kubenswrapper[5008]: W0129 15:50:39.802897 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f34f608_b2f8_452e_8f0d_ef600929c36e.slice/crio-9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee WatchSource:0}: Error finding container 9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee: Status 404 returned error can't find the container with id 9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.803632 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.820756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"} Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.854542 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.988477 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.138602 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.247390 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.502824 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.510533 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.613937 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.621137 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720015 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720259 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6445bd445b-mhznq" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" containerID="cri-o://922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" gracePeriod=30 Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720659 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6445bd445b-mhznq" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" containerID="cri-o://eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" gracePeriod=30 Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.837645 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerStarted","Data":"54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.841704 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerStarted","Data":"9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.842815 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerStarted","Data":"50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.844221 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerStarted","Data":"be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.844245 5008 kubelet.go:2453] 
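"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerStarted","Data":"9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee"}

The placement-6445bd445b-mhznq delete above shows the kubelet's two-step shutdown: each container is first asked to stop (gracePeriod=30) and is only killed outright if it is still running when the grace period lapses. The exitCode=143 that appears a moment later is the usual signature of a process that honoured the polite signal (128 plus SIGTERM's number 15). A small Go sketch of that SIGTERM-then-SIGKILL pattern, with a local process standing in for the CRI container:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "syscall"
    "time"
)

// stopWithGrace mimics the pattern behind "Killing container with a grace
// period": ask politely with SIGTERM, then force SIGKILL once the grace
// period runs out. A sketch against a local process, not a CRI runtime.
func stopWithGrace(p *os.Process, done <-chan error, grace time.Duration) error {
    if err := p.Signal(syscall.SIGTERM); err != nil {
        return err
    }
    select {
    case err := <-done:
        // Exited inside the grace period; a SIGTERM death is what surfaces
        // in the log as exitCode=143 (128 + signal number 15).
        return err
    case <-time.After(grace):
        return p.Kill() // grace period exhausted: SIGKILL
    }
}

func main() {
    cmd := exec.Command("sleep", "300")
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()
    fmt.Println("stop:", stopWithGrace(cmd.Process, done, 30*time.Second))
}
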
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerStarted","Data":"9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.845022 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerStarted","Data":"aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8"} Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.847424 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerStarted","Data":"c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.255347 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.438735 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.438974 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" containerID="cri-o://e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" gracePeriod=30 Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.439088 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" containerID="cri-o://c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" gracePeriod=30 Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.857171 5008 generic.go:334] "Generic (PLEG): container finished" podID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerID="e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" exitCode=143 Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.857270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.859341 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerStarted","Data":"4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.861168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerStarted","Data":"415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.864087 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" exitCode=143 Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.864130 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" 
event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.865573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerStarted","Data":"9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.866896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerStarted","Data":"169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.869627 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerStarted","Data":"84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5"} Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.888385 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" podStartSLOduration=2.888367486 podStartE2EDuration="2.888367486s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.876713774 +0000 UTC m=+1385.549568021" watchObservedRunningTime="2026-01-29 15:50:41.888367486 +0000 UTC m=+1385.561221733" Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.895870 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-stxgj" podStartSLOduration=2.895852468 podStartE2EDuration="2.895852468s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.891534984 +0000 UTC m=+1385.564389231" watchObservedRunningTime="2026-01-29 15:50:41.895852468 +0000 UTC m=+1385.568706715" Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.913723 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" podStartSLOduration=2.913702692 podStartE2EDuration="2.913702692s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.906566268 +0000 UTC m=+1385.579420515" watchObservedRunningTime="2026-01-29 15:50:41.913702692 +0000 UTC m=+1385.586556929" Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.936673 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e284-account-create-update-cz9rj" podStartSLOduration=2.9366512780000003 podStartE2EDuration="2.936651278s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.931359829 +0000 UTC m=+1385.604214086" watchObservedRunningTime="2026-01-29 15:50:41.936651278 +0000 UTC m=+1385.609505525" Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.956828 5008 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-db-create-lmdpk" podStartSLOduration=3.956802506 podStartE2EDuration="3.956802506s" podCreationTimestamp="2026-01-29 15:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.944610671 +0000 UTC m=+1385.617464908" watchObservedRunningTime="2026-01-29 15:50:41.956802506 +0000 UTC m=+1385.629656763" Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.964062 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9xnkt" podStartSLOduration=3.964046263 podStartE2EDuration="3.964046263s" podCreationTimestamp="2026-01-29 15:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.957314899 +0000 UTC m=+1385.630169136" watchObservedRunningTime="2026-01-29 15:50:41.964046263 +0000 UTC m=+1385.636900490" Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.885989 5008 generic.go:334] "Generic (PLEG): container finished" podID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerID="9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9" exitCode=0 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.886482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerDied","Data":"9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9"} Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.889376 5008 generic.go:334] "Generic (PLEG): container finished" podID="110f96e6-c230-44f3-9247-90283da8976c" containerID="84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5" exitCode=0 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.889489 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerDied","Data":"84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5"} Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908413 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"} Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908705 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" containerID="cri-o://5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" gracePeriod=30 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908858 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908926 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" containerID="cri-o://1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" gracePeriod=30 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908998 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" 
containerID="cri-o://6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" gracePeriod=30 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.909056 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" containerID="cri-o://57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" gracePeriod=30 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.921451 5008 generic.go:334] "Generic (PLEG): container finished" podID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerID="be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2" exitCode=0 Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.922547 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerDied","Data":"be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2"} Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.964888 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336980572 podStartE2EDuration="7.964873839s" podCreationTimestamp="2026-01-29 15:50:35 +0000 UTC" firstStartedPulling="2026-01-29 15:50:36.696694891 +0000 UTC m=+1380.369549128" lastFinishedPulling="2026-01-29 15:50:42.324588158 +0000 UTC m=+1385.997442395" observedRunningTime="2026-01-29 15:50:42.959468988 +0000 UTC m=+1386.632323225" watchObservedRunningTime="2026-01-29 15:50:42.964873839 +0000 UTC m=+1386.637728076" Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.935775 5008 generic.go:334] "Generic (PLEG): container finished" podID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerID="415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896" exitCode=0 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.935818 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerDied","Data":"415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896"} Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.937693 5008 generic.go:334] "Generic (PLEG): container finished" podID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerID="169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9" exitCode=0 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.937725 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerDied","Data":"169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9"} Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.939414 5008 generic.go:334] "Generic (PLEG): container finished" podID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerID="4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8" exitCode=0 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.939461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerDied","Data":"4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8"} Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942328 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" 
containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" exitCode=0 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942346 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" exitCode=2 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942355 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" exitCode=0 Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942417 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"} Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942463 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"} Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.455091 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.532154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"110f96e6-c230-44f3-9247-90283da8976c\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.532271 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"110f96e6-c230-44f3-9247-90283da8976c\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.533146 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "110f96e6-c230-44f3-9247-90283da8976c" (UID: "110f96e6-c230-44f3-9247-90283da8976c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.537898 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k" (OuterVolumeSpecName: "kube-api-access-b5t4k") pod "110f96e6-c230-44f3-9247-90283da8976c" (UID: "110f96e6-c230-44f3-9247-90283da8976c"). InnerVolumeSpecName "kube-api-access-b5t4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.592731 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.596123 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.600622 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"7f34f608-b2f8-452e-8f0d-ef600929c36e\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633754 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633813 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633847 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633918 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"7f34f608-b2f8-452e-8f0d-ef600929c36e\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633963 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634006 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634024 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"d6a58042-fefd-43b8-b186-905dcfc7b1af\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") 
pod \"d6a58042-fefd-43b8-b186-905dcfc7b1af\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634105 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634127 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634189 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f34f608-b2f8-452e-8f0d-ef600929c36e" (UID: "7f34f608-b2f8-452e-8f0d-ef600929c36e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634989 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.635078 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.635146 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.638138 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6a58042-fefd-43b8-b186-905dcfc7b1af" (UID: "d6a58042-fefd-43b8-b186-905dcfc7b1af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.647021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts" (OuterVolumeSpecName: "scripts") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.647085 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs" (OuterVolumeSpecName: "logs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.677077 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw" (OuterVolumeSpecName: "kube-api-access-qhbxw") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "kube-api-access-qhbxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.723389 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb" (OuterVolumeSpecName: "kube-api-access-wh4cb") pod "7f34f608-b2f8-452e-8f0d-ef600929c36e" (UID: "7f34f608-b2f8-452e-8f0d-ef600929c36e"). InnerVolumeSpecName "kube-api-access-wh4cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.723824 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w" (OuterVolumeSpecName: "kube-api-access-gdg6w") pod "d6a58042-fefd-43b8-b186-905dcfc7b1af" (UID: "d6a58042-fefd-43b8-b186-905dcfc7b1af"). InnerVolumeSpecName "kube-api-access-gdg6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.731969 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.732231 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" containerID="cri-o://dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" gracePeriod=30 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.732296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" containerID="cri-o://545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" gracePeriod=30 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757617 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757656 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757668 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757691 5008 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757702 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.770869 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.770971 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data" (OuterVolumeSpecName: "config-data") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.821068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.840275 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859031 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859076 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859088 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859101 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerDied","Data":"aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954464 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954431 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.956991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.957034 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerDied","Data":"54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.957189 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.960928 5008 generic.go:334] "Generic (PLEG): container finished" podID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerID="c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" exitCode=0 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.961066 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.970400 5008 generic.go:334] "Generic (PLEG): container finished" podID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerID="dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" exitCode=143 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.970513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976129 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" exitCode=0 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976185 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976412 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"359a72657c9bfba53abd214342c7a1e93d76aafd5e6beccbea5acec3bf995e32"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976522 5008 scope.go:117] "RemoveContainer" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976200 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.988701 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.989246 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerDied","Data":"9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee"} Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.989280 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.045812 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.060066 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.064425 5008 scope.go:117] "RemoveContainer" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.125763 5008 scope.go:117] "RemoveContainer" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:45 crc kubenswrapper[5008]: E0129 15:50:45.126542 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": container with ID starting with eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a not found: ID does not exist" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.126622 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"} err="failed to get container status \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": rpc error: code = NotFound desc = could not find container \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": container with ID starting with eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a not found: ID does not exist" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.126666 5008 scope.go:117] "RemoveContainer" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: E0129 15:50:45.128484 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": container with ID starting with 922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd not found: ID does not exist" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.128534 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"} err="failed to get container status \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": rpc error: code = NotFound desc = could not find container \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": container with ID starting with 922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd not found: ID 
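does not exist"

The two NotFound failures above are harmless: by the time the kubelet re-queries the containers it has just removed, the runtime has already forgotten them, and a container that is already gone is the desired end state rather than an error worth retrying. A dependency-free Go sketch of that idempotent-cleanup pattern; errNotFound stands in for the gRPC NotFound code a real CRI runtime returns:

package main

import (
    "errors"
    "fmt"
)

// errNotFound stands in for the CRI runtime's NotFound status; the real
// kubelet inspects the gRPC status code on the ContainerStatus RPC.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer pretends to delete a container that no longer exists.
func removeContainer(id string) error {
    return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
}

// cleanup treats NotFound as success: an already-removed container is the
// desired end state, so the error is noted and swallowed, not retried.
func cleanup(id string) {
    if err := removeContainer(id); err != nil {
        if errors.Is(err, errNotFound) {
            fmt.Println("already gone, nothing to do:", err)
            return
        }
        fmt.Println("real failure, would retry:", err)
    }
}

func main() {
    cleanup("eda94a8d83e7")
}
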
does not exist" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.220414 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268585 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268634 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268734 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268870 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269092 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269202 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269256 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.274569 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts" (OuterVolumeSpecName: "scripts") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.283067 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.286650 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.291053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs" (OuterVolumeSpecName: "logs") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.298936 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q" (OuterVolumeSpecName: "kube-api-access-rw62q") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "kube-api-access-rw62q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.307981 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.323392 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.359287 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" path="/var/lib/kubelet/pods/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b/volumes" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386319 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386351 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388654 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388670 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388692 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388714 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386955 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data" (OuterVolumeSpecName: "config-data") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.407788 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.491798 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.491833 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.607102 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.698066 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"804a6c8c-4d3d-4949-adad-bf28d059ac39\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.698330 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"804a6c8c-4d3d-4949-adad-bf28d059ac39\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.699004 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "804a6c8c-4d3d-4949-adad-bf28d059ac39" (UID: "804a6c8c-4d3d-4949-adad-bf28d059ac39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.699896 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.702053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch" (OuterVolumeSpecName: "kube-api-access-9z7ch") pod "804a6c8c-4d3d-4949-adad-bf28d059ac39" (UID: "804a6c8c-4d3d-4949-adad-bf28d059ac39"). InnerVolumeSpecName "kube-api-access-9z7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.712560 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"63f2899c-3ee5-4d2c-ae4f-487783fede07\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799435 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"63f2899c-3ee5-4d2c-ae4f-487783fede07\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799462 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800090 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63f2899c-3ee5-4d2c-ae4f-487783fede07" (UID: "63f2899c-3ee5-4d2c-ae4f-487783fede07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800571 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" (UID: "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800826 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800849 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800860 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800872 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.804059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj" (OuterVolumeSpecName: "kube-api-access-pq7rj") pod "63f2899c-3ee5-4d2c-ae4f-487783fede07" (UID: "63f2899c-3ee5-4d2c-ae4f-487783fede07"). InnerVolumeSpecName "kube-api-access-pq7rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.805516 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q" (OuterVolumeSpecName: "kube-api-access-2zz5q") pod "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" (UID: "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e"). InnerVolumeSpecName "kube-api-access-2zz5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.902477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.902519 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001862 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerDied","Data":"c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001938 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001868 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"7e694d90fa6a6ef1130c12d5f4ef32d5a6b46fd7321b4f1fabcb430d1ab3333d"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004591 5008 scope.go:117] "RemoveContainer" containerID="c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004798 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerDied","Data":"9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008716 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008951 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018367 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerDied","Data":"50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018407 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018476 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.055269 5008 scope.go:117] "RemoveContainer" containerID="e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.085742 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.100848 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.109743 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126191 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126227 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126242 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126248 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126257 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126262 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126279 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126286 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126296 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126301 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126314 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126319 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126325 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126330 5008 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126345 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126350 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126360 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126366 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126380 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126386 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126542 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126552 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126562 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126570 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126579 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126592 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126606 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126614 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126624 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126632 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.127437 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 
15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.127522 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.139690 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.139695 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207096 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207149 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207169 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207196 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207213 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207240 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308594 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308685 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308727 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308770 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308804 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308828 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309224 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" 
(UID: \"b210097f-985c-4014-a76e-b430ef390fce\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309293 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.314567 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.315172 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.315240 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.319401 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.330084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.339358 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.455468 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:47 crc kubenswrapper[5008]: I0129 15:50:47.199679 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:47 crc kubenswrapper[5008]: I0129 15:50:47.334481 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" path="/var/lib/kubelet/pods/a4572386-a7c3-434a-8bcb-d1643d6893c9/volumes" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.053804 5008 generic.go:334] "Generic (PLEG): container finished" podID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerID="545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" exitCode=0 Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.054074 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.080691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"1c77dc5a1c47165cc89495ccf8800d8e17aa07125ab958bde86eb223c06adc95"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.080754 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"a8f0c3e7553f02acd8bc69cfc8d32757da715fe2a5c25e250d1acf3cc83d59b1"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.639506 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.788820 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789111 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789189 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789261 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789331 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789384 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789418 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789412 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789935 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.790503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs" (OuterVolumeSpecName: "logs") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.796386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.796406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts" (OuterVolumeSpecName: "scripts") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.797085 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv" (OuterVolumeSpecName: "kube-api-access-wzqpv") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "kube-api-access-wzqpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.822905 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.853192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data" (OuterVolumeSpecName: "config-data") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.854985 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891490 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891740 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891874 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891972 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892054 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892134 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892206 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.923644 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.993308 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092371 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"d5ff4add692e0bdecfe0d236bfcf204bfe9c6a37130e4e5f390ced855d6ac026"} Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092396 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092430 5008 scope.go:117] "RemoveContainer" containerID="545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.096042 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"92c70c8e7f911b9a5337dd362e47e177fc7522ef2c3e0b34c3e165d1d390335d"} Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.121851 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.12183069 podStartE2EDuration="3.12183069s" podCreationTimestamp="2026-01-29 15:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:49.115308781 +0000 UTC m=+1392.788163018" watchObservedRunningTime="2026-01-29 15:50:49.12183069 +0000 UTC m=+1392.794684927" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.130597 5008 scope.go:117] "RemoveContainer" containerID="dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.154171 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.164094 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.182843 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: E0129 15:50:49.184148 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.184195 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: E0129 15:50:49.184227 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.187800 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.188542 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.188569 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.190394 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.195747 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.195855 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.197446 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297432 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297491 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297529 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297597 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297651 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.334086 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" path="/var/lib/kubelet/pods/deb07ec3-dbb1-49c4-a9cc-155472fc28bd/volumes" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399303 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399415 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399436 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399462 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400705 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400723 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404526 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404886 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404948 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.432415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.439211 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.518621 5008 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.656995 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.658383 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.660353 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.660737 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s4fbc" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.665589 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.673383 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808382 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909605 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909687 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " 
pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909744 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.915377 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.915736 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.919701 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.934973 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.981939 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:50 crc kubenswrapper[5008]: I0129 15:50:50.118286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:50 crc kubenswrapper[5008]: W0129 15:50:50.128706 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd30face9_2636_4cb7_8e84_8558b7b40df4.slice/crio-49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3 WatchSource:0}: Error finding container 49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3: Status 404 returned error can't find the container with id 49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3 Jan 29 15:50:50 crc kubenswrapper[5008]: I0129 15:50:50.291742 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:50 crc kubenswrapper[5008]: W0129 15:50:50.295838 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171 WatchSource:0}: Error finding container 9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171: Status 404 returned error can't find the container with id 9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171 Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.118573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"7d923a651364584dd0a68975de72d53fe72eae96ed00ce0b324f3cba07f9ce12"} Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.118954 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3"} Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.124346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerStarted","Data":"9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171"} Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.149408 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"7b624b6342c7c0ec0d24499d3b0550c0800023f732ceb5a4c809881749409b62"} Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.183157 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.183141108 podStartE2EDuration="3.183141108s" podCreationTimestamp="2026-01-29 15:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:52.176699522 +0000 UTC m=+1395.849553759" watchObservedRunningTime="2026-01-29 15:50:52.183141108 +0000 UTC m=+1395.855995345" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.875695 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982416 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982477 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982627 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982722 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982772 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982882 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.984163 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.985637 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.987719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc" (OuterVolumeSpecName: "kube-api-access-6rfpc") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "kube-api-access-6rfpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.992617 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts" (OuterVolumeSpecName: "scripts") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.008849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.064029 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085501 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085534 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085543 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085552 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085561 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085572 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.097114 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data" (OuterVolumeSpecName: "config-data") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162271 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" exitCode=0 Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162347 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"} Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162367 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162403 5008 scope.go:117] "RemoveContainer" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551"} Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.190720 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.201910 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.203762 5008 scope.go:117] "RemoveContainer" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.221969 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.246658 5008 scope.go:117] "RemoveContainer" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.256913 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257576 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257599 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257617 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257625 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257642 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 
crc kubenswrapper[5008]: I0129 15:50:53.257648 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257671 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257677 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257857 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257878 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257885 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257896 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.264469 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.268135 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.269260 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.280484 5008 scope.go:117] "RemoveContainer" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.281652 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.311493 5008 scope.go:117] "RemoveContainer" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.311897 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": container with ID starting with 1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71 not found: ID does not exist" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.311953 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"} err="failed to get container status \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": rpc error: code = NotFound desc = could not find container \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": container with ID starting with 1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71 not found: ID does not exist" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 
15:50:53.311975 5008 scope.go:117] "RemoveContainer" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.312496 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": container with ID starting with 6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d not found: ID does not exist" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312516 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"} err="failed to get container status \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": rpc error: code = NotFound desc = could not find container \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": container with ID starting with 6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d not found: ID does not exist" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312531 5008 scope.go:117] "RemoveContainer" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.312759 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": container with ID starting with 57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce not found: ID does not exist" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312805 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"} err="failed to get container status \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": rpc error: code = NotFound desc = could not find container \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": container with ID starting with 57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce not found: ID does not exist" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312818 5008 scope.go:117] "RemoveContainer" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.313188 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": container with ID starting with 5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4 not found: ID does not exist" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.313213 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"} err="failed to get container status \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": rpc error: code = NotFound desc = could not find container \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": container with ID 
starting with 5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4 not found: ID does not exist" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.344531 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" path="/var/lib/kubelet/pods/c81636ad-f799-43f6-8304-b2121e7bb427/volumes" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398628 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398712 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398774 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517734 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517794 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517879 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517904 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517961 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.518377 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.521294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.522372 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.522435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.523442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.532876 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.534767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.611945 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:54 crc kubenswrapper[5008]: I0129 15:50:54.074228 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:54 crc kubenswrapper[5008]: W0129 15:50:54.076858 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d8b2f2_f15e_4b9a_a522_35d228919444.slice/crio-1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e WatchSource:0}: Error finding container 1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e: Status 404 returned error can't find the container with id 1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e Jan 29 15:50:54 crc kubenswrapper[5008]: I0129 15:50:54.176174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e"} Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.456044 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.457614 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.494385 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.505399 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:50:57 crc kubenswrapper[5008]: I0129 15:50:57.203053 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:50:57 crc kubenswrapper[5008]: I0129 15:50:57.203691 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.224887 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.225286 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.236044 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: 
I0129 15:50:59.245377 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.519016 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.519297 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.565487 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.575272 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:51:00 crc kubenswrapper[5008]: I0129 15:51:00.236008 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:51:00 crc kubenswrapper[5008]: I0129 15:51:00.236053 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:51:01 crc kubenswrapper[5008]: I0129 15:51:01.814071 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.232965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.252623 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerStarted","Data":"cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e"} Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.254452 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82"} Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.254470 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.274401 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9mffk" podStartSLOduration=2.481375735 podStartE2EDuration="13.274383521s" podCreationTimestamp="2026-01-29 15:50:49 +0000 UTC" firstStartedPulling="2026-01-29 15:50:50.300917841 +0000 UTC m=+1393.973772078" lastFinishedPulling="2026-01-29 15:51:01.093925627 +0000 UTC m=+1404.766779864" observedRunningTime="2026-01-29 15:51:02.269721759 +0000 UTC m=+1405.942575996" watchObservedRunningTime="2026-01-29 15:51:02.274383521 +0000 UTC m=+1405.947237758" Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.349299 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:51:03 crc kubenswrapper[5008]: I0129 15:51:03.264955 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3"} Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.463113 5008 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.468998 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.488637 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.605879 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.606157 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.606307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.707774 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.708184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.708376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.709134 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.709545 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " 
pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.728632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.792469 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.255645 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.316400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"0fc90fe6f216734f5716f0b5ec0a72d9a7f69d6941dabccd64c564046956dd2e"} Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.322089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa"} Jan 29 15:51:10 crc kubenswrapper[5008]: I0129 15:51:10.344524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b"} Jan 29 15:51:11 crc kubenswrapper[5008]: I0129 15:51:11.358691 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b" exitCode=0 Jan 29 15:51:11 crc kubenswrapper[5008]: I0129 15:51:11.358765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b"} Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.372218 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2"} Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378260 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28"} Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378574 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent" containerID="cri-o://333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82" gracePeriod=30 Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378877 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378943 5008 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" containerID="cri-o://3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28" gracePeriod=30 Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.379033 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core" containerID="cri-o://4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa" gracePeriod=30 Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.379111 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent" containerID="cri-o://35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3" gracePeriod=30 Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.393659 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2" exitCode=0 Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.393774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2"} Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400563 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa" exitCode=2 Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400608 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3" exitCode=0 Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400621 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82" exitCode=0 Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400631 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa"} Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400701 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3"} Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400713 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82"} Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.428413 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.515717908 podStartE2EDuration="20.428387087s" podCreationTimestamp="2026-01-29 15:50:53 +0000 UTC" firstStartedPulling="2026-01-29 15:50:54.08120993 +0000 UTC 
m=+1397.754064167" lastFinishedPulling="2026-01-29 15:51:11.993879109 +0000 UTC m=+1415.666733346" observedRunningTime="2026-01-29 15:51:12.427926528 +0000 UTC m=+1416.100780795" watchObservedRunningTime="2026-01-29 15:51:13.428387087 +0000 UTC m=+1417.101241334" Jan 29 15:51:14 crc kubenswrapper[5008]: I0129 15:51:14.415002 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186"} Jan 29 15:51:14 crc kubenswrapper[5008]: I0129 15:51:14.444628 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klfxq" podStartSLOduration=4.847403096 podStartE2EDuration="7.444609737s" podCreationTimestamp="2026-01-29 15:51:07 +0000 UTC" firstStartedPulling="2026-01-29 15:51:11.360301511 +0000 UTC m=+1415.033155768" lastFinishedPulling="2026-01-29 15:51:13.957508172 +0000 UTC m=+1417.630362409" observedRunningTime="2026-01-29 15:51:14.437241008 +0000 UTC m=+1418.110095255" watchObservedRunningTime="2026-01-29 15:51:14.444609737 +0000 UTC m=+1418.117463974" Jan 29 15:51:17 crc kubenswrapper[5008]: I0129 15:51:17.793340 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:17 crc kubenswrapper[5008]: I0129 15:51:17.793680 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:18 crc kubenswrapper[5008]: I0129 15:51:18.843426 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-klfxq" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" probeResult="failure" output=< Jan 29 15:51:18 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:51:18 crc kubenswrapper[5008]: > Jan 29 15:51:23 crc kubenswrapper[5008]: I0129 15:51:23.658081 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 15:51:27 crc kubenswrapper[5008]: I0129 15:51:27.852287 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:27 crc kubenswrapper[5008]: I0129 15:51:27.934529 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:28 crc kubenswrapper[5008]: I0129 15:51:28.647134 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:29 crc kubenswrapper[5008]: I0129 15:51:29.561350 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klfxq" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" containerID="cri-o://ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186" gracePeriod=2 Jan 29 15:51:30 crc kubenswrapper[5008]: I0129 15:51:30.576217 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186" exitCode=0 Jan 29 15:51:30 crc kubenswrapper[5008]: I0129 15:51:30.576303 5008 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186"} Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.482038 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.576934 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.577236 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.577273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.578258 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities" (OuterVolumeSpecName: "utilities") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.582720 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w" (OuterVolumeSpecName: "kube-api-access-2268w") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "kube-api-access-2268w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"0fc90fe6f216734f5716f0b5ec0a72d9a7f69d6941dabccd64c564046956dd2e"} Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588574 5008 scope.go:117] "RemoveContainer" containerID="ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588611 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.678701 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.678767 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.714661 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.780020 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.925360 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.939231 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"] Jan 29 15:51:32 crc kubenswrapper[5008]: I0129 15:51:32.518276 5008 scope.go:117] "RemoveContainer" containerID="f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2" Jan 29 15:51:32 crc kubenswrapper[5008]: I0129 15:51:32.559128 5008 scope.go:117] "RemoveContainer" containerID="462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b" Jan 29 15:51:33 crc kubenswrapper[5008]: I0129 15:51:33.339706 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" path="/var/lib/kubelet/pods/4463fec1-8026-4831-9f99-d7b8ba936dc2/volumes" Jan 29 15:51:37 crc kubenswrapper[5008]: I0129 15:51:37.657371 5008 generic.go:334] "Generic (PLEG): container finished" podID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerID="cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e" exitCode=0 Jan 29 15:51:37 crc kubenswrapper[5008]: I0129 15:51:37.657539 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerDied","Data":"cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e"} Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.027695 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.207843 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208241 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208312 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208356 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.213808 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts" (OuterVolumeSpecName: "scripts") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.216966 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p" (OuterVolumeSpecName: "kube-api-access-ls57p") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "kube-api-access-ls57p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.241318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.243886 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data" (OuterVolumeSpecName: "config-data") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311129 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311180 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311194 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311205 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677473 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerDied","Data":"9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171"} Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677510 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677568 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817357 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.817912 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817943 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.817974 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-content" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817987 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-content" Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.818005 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-utilities" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818017 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-utilities" Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.818072 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818086 5008 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818389 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818415 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.819322 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.826058 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.826093 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s4fbc" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.830849 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.921891 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.921990 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.922316 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.024710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.025064 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.025930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc 
kubenswrapper[5008]: I0129 15:51:40.028809 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.028890 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.041140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.145256 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.606485 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.686685 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fc7804a1-e957-4095-b882-901a403bce11","Type":"ContainerStarted","Data":"4d79723cffb908add611004361bb98aa1374424b0a267ad0392e0fad3299d496"} Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.700702 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fc7804a1-e957-4095-b882-901a403bce11","Type":"ContainerStarted","Data":"5819b755290290b2c26f61417a55999054b0b315e48cf27e3ed3f924cc962e36"} Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.700972 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.743287 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.743259352 podStartE2EDuration="2.743259352s" podCreationTimestamp="2026-01-29 15:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:41.72790695 +0000 UTC m=+1445.400761267" watchObservedRunningTime="2026-01-29 15:51:41.743259352 +0000 UTC m=+1445.416113629" Jan 29 15:51:42 crc kubenswrapper[5008]: E0129 15:51:42.644114 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d8b2f2_f15e_4b9a_a522_35d228919444.slice/crio-conmon-3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:51:42 crc kubenswrapper[5008]: I0129 15:51:42.713020 5008 generic.go:334] "Generic (PLEG): container 
finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28" exitCode=137 Jan 29 15:51:42 crc kubenswrapper[5008]: I0129 15:51:42.713061 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28"} Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.275386 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388854 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388974 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389015 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389137 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389167 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389280 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389512 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389794 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389816 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.395518 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv" (OuterVolumeSpecName: "kube-api-access-58fcv") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "kube-api-access-58fcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.410495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts" (OuterVolumeSpecName: "scripts") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.416387 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507236 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507266 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507277 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.513998 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.535165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data" (OuterVolumeSpecName: "config-data") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.609585 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.609640 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728744 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e"} Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728824 5008 scope.go:117] "RemoveContainer" containerID="3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728967 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.751330 5008 scope.go:117] "RemoveContainer" containerID="4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.773459 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.778626 5008 scope.go:117] "RemoveContainer" containerID="35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.786459 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.816820 5008 scope.go:117] "RemoveContainer" containerID="333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.822605 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823130 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823158 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823179 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823189 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core" Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823207 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823218 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823242 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823252 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823562 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823592 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823609 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823624 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.826459 5008 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.829800 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.829965 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.832497 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.914977 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915428 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915551 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915630 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915675 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.990887 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:51:43 crc 
kubenswrapper[5008]: I0129 15:51:43.990973 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018028 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018150 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018179 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018214 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018274 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018378 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.019078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.019454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.020384 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc 
kubenswrapper[5008]: I0129 15:51:44.026741 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.027014 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.027747 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.034922 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.047237 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.154071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.569420 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.738832 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"0c880a32127e0f9cf20872f0cb9c9103c1ec0fcb4e31857d57145ee7e6ef5eff"} Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.178947 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.336858 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" path="/var/lib/kubelet/pods/36d8b2f2-f15e-4b9a-a522-35d228919444/volumes" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.688655 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.689874 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.692274 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.692830 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.701942 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.751267 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6"} Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857881 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.886048 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.887118 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.889951 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.912528 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.950775 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.952693 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959117 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959307 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959401 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.977509 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.979123 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.981547 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.981581 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.982248 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.991896 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.053849 5008 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060746 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060846 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060876 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060898 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060962 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.061010 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.118853 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.120436 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.125503 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.158563 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163815 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163868 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163955 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163971 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.164068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.183367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.185439 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.194303 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.195002 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.195565 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.201173 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.207086 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.213601 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.232393 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.240418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.257212 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.268430 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.268705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.269043 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.269187 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.287843 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.291131 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.299682 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372729 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372791 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372849 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372924 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.373014 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.376146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.376608 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.380176 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.407500 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474180 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474239 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474260 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474295 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474319 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474457 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474481 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474521 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.481446 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.489201 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.491020 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.495991 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.508250 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.580920 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581197 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581925 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581969 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582081 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582116 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582126 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582745 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.583033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 
15:51:46.583615 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.584085 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.588652 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.621078 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.624611 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.697582 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:46 crc kubenswrapper[5008]: W0129 15:51:46.772316 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef9ab07_3037_4115_bb8e_954191b169af.slice/crio-8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6 WatchSource:0}: Error finding container 8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6: Status 404 returned error can't find the container with id 8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6 Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.796291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14"} Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.919922 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.921439 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.923813 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.924672 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.932957 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.010130 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.082415 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095624 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095678 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095731 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095769 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.197768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198091 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.205522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.206685 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.218039 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.220081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.226497 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.268232 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.438599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.448515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.784090 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:47 crc kubenswrapper[5008]: W0129 15:51:47.792903 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d0cf25_1253_4f34_91a0_c4381d2e8a3f.slice/crio-028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec WatchSource:0}: Error finding container 028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec: Status 404 returned error can't find the container with id 028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.824038 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerStarted","Data":"868f4c6e442b8edd70fd72637691064134ed05f40e47973b7eb3e61bb8292d33"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.828127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerStarted","Data":"89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.828176 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerStarted","Data":"8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.831301 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"0fa105059117f2b4c51f1c17146bba198c1ad14ed2d53794274c62ac38095b80"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.833639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerStarted","Data":"028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.835292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerStarted","Data":"89acbc3b89babecb84402f3ec55311a2ac1633dd886e5581dfb789b75a401ac3"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.844016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerStarted","Data":"30bedbc0bc93f8ca5f3511d1081097f8182d9fc6d6457e41dfa4a6a23655328a"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.853935 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"9a2f240b5615f7e4a96ac0c8a498b92dc644dc9f81d75537df39e3b9f01f9020"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.864414 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2crqc" podStartSLOduration=2.864392962 podStartE2EDuration="2.864392962s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:47.858485999 +0000 UTC m=+1451.531340246" watchObservedRunningTime="2026-01-29 15:51:47.864392962 +0000 UTC m=+1451.537247199" Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.881318 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerStarted","Data":"36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8"} Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.886801 5008 generic.go:334] "Generic (PLEG): container finished" podID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerID="60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa" exitCode=0 Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.887089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa"} Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.914325 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" podStartSLOduration=2.91430617 podStartE2EDuration="2.91430617s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:48.910263662 +0000 UTC m=+1452.583117909" watchObservedRunningTime="2026-01-29 15:51:48.91430617 +0000 UTC m=+1452.587160407" Jan 29 15:51:49 crc kubenswrapper[5008]: I0129 15:51:49.623834 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:49 crc kubenswrapper[5008]: I0129 15:51:49.632149 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:52 crc kubenswrapper[5008]: E0129 15:51:52.930288 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.416870 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.417579 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vwdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.421345 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937488 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e"} Jan 29 15:51:53 crc kubenswrapper[5008]: 
I0129 15:51:53.937543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937594 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" containerID="cri-o://445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937611 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" containerID="cri-o://3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.940646 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerStarted","Data":"a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.945270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.945335 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.947529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerStarted","Data":"85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.947556 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.949678 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4"} Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.952170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.953398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerStarted","Data":"1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3"} Jan 29 15:51:53 crc kubenswrapper[5008]: 
Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.954026 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4"
Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.969532 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.329968576 podStartE2EDuration="7.969514845s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.235279552 +0000 UTC m=+1450.908133789" lastFinishedPulling="2026-01-29 15:51:52.874825781 +0000 UTC m=+1456.547680058" observedRunningTime="2026-01-29 15:51:53.961955561 +0000 UTC m=+1457.634809798" watchObservedRunningTime="2026-01-29 15:51:53.969514845 +0000 UTC m=+1457.642369092"
Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.984019 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.207788502 podStartE2EDuration="8.984000346s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.098611547 +0000 UTC m=+1450.771465774" lastFinishedPulling="2026-01-29 15:51:52.874823381 +0000 UTC m=+1456.547677618" observedRunningTime="2026-01-29 15:51:53.981243989 +0000 UTC m=+1457.654098246" watchObservedRunningTime="2026-01-29 15:51:53.984000346 +0000 UTC m=+1457.656854603"
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.026366 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.136945453 podStartE2EDuration="9.026342583s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.019139889 +0000 UTC m=+1450.691994126" lastFinishedPulling="2026-01-29 15:51:52.908537009 +0000 UTC m=+1456.581391256" observedRunningTime="2026-01-29 15:51:54.015621574 +0000 UTC m=+1457.688475811" watchObservedRunningTime="2026-01-29 15:51:54.026342583 +0000 UTC m=+1457.699196830"
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.042908 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.597022364 podStartE2EDuration="8.042893395s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.42896244 +0000 UTC m=+1451.101816677" lastFinishedPulling="2026-01-29 15:51:52.874833461 +0000 UTC m=+1456.547687708" observedRunningTime="2026-01-29 15:51:54.036436019 +0000 UTC m=+1457.709290276" watchObservedRunningTime="2026-01-29 15:51:54.042893395 +0000 UTC m=+1457.715747632"
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.059087 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" podStartSLOduration=8.059066817 podStartE2EDuration="8.059066817s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:54.053170575 +0000 UTC m=+1457.726024812" watchObservedRunningTime="2026-01-29 15:51:54.059066817 +0000 UTC m=+1457.731921064"
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.977599 5008 generic.go:334] "Generic (PLEG): container finished" podID="eef9ab07-3037-4115-bb8e-954191b169af" containerID="89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7" exitCode=0
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.977688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerDied","Data":"89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7"}
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982309 5008 generic.go:334] "Generic (PLEG): container finished" podID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerID="3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" exitCode=0
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982342 5008 generic.go:334] "Generic (PLEG): container finished" podID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerID="445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" exitCode=143
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982401 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e"}
Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982436 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0"}
Jan 29 15:51:54 crc kubenswrapper[5008]: E0129 15:51:54.989157 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.292693 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463326 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") "
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463431 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") "
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") "
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463601 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") "
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.465405 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs" (OuterVolumeSpecName: "logs") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.470913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n" (OuterVolumeSpecName: "kube-api-access-2zl2n") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "kube-api-access-2zl2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.507623 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.525687 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data" (OuterVolumeSpecName: "config-data") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566067 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566111 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566130 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566144 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.996988 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"9a2f240b5615f7e4a96ac0c8a498b92dc644dc9f81d75537df39e3b9f01f9020"} Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.997092 5008 scope.go:117] "RemoveContainer" containerID="3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.056801 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.088103 5008 scope.go:117] "RemoveContainer" containerID="445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.094042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112159 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: E0129 15:51:56.112553 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112571 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: E0129 15:51:56.112585 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112593 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112774 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.114123 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.117148 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.118955 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.145527 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280286 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280330 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280508 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280714 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383205 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383269 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " 
pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383353 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.384402 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.387596 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.389096 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.389409 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.410397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.438385 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.497105 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.497156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.509280 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.509351 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.565552 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.567641 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.589981 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.689996 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690053 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690122 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.697913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts" (OuterVolumeSpecName: "scripts") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.697949 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7" (OuterVolumeSpecName: "kube-api-access-zsxq7") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "kube-api-access-zsxq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.741882 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.745913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data" (OuterVolumeSpecName: "config-data") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792708 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792744 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792758 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792768 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.963417 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerDied","Data":"8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6"} Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012634 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012268 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.015856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"99b77dffe653c476d71bee1455ad5af6222f956467e807cdf199f218d7c28bf8"} Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.066554 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175255 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175502 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" containerID="cri-o://2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" gracePeriod=30 Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175585 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" containerID="cri-o://6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" gracePeriod=30 Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.182822 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.183086 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.198961 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.334480 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" path="/var/lib/kubelet/pods/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0/volumes" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.736939 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025042 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025158 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" containerID="cri-o://bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" gracePeriod=30 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025675 5008 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" containerID="cri-o://dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" gracePeriod=30 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.032519 5008 generic.go:334] "Generic (PLEG): container finished" podID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerID="2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" exitCode=143 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.032836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.085849 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.085828305 podStartE2EDuration="2.085828305s" podCreationTimestamp="2026-01-29 15:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:58.058083332 +0000 UTC m=+1461.730937619" watchObservedRunningTime="2026-01-29 15:51:58.085828305 +0000 UTC m=+1461.758682562" Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.053923 5008 generic.go:334] "Generic (PLEG): container finished" podID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerID="dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" exitCode=0 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054240 5008 generic.go:334] "Generic (PLEG): container finished" podID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerID="bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" exitCode=143 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054018 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109"} Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21"} Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054416 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" containerID="cri-o://a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" gracePeriod=30 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.925660 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.052941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053009 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053117 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053159 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053208 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053616 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs" (OuterVolumeSpecName: "logs") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053803 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.059040 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm" (OuterVolumeSpecName: "kube-api-access-mdglm") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "kube-api-access-mdglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069305 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"99b77dffe653c476d71bee1455ad5af6222f956467e807cdf199f218d7c28bf8"} Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069364 5008 scope.go:117] "RemoveContainer" containerID="dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069380 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.086555 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data" (OuterVolumeSpecName: "config-data") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.102762 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.109156 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155450 5008 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155484 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155494 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155505 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.183943 5008 scope.go:117] "RemoveContainer" containerID="bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.414955 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.424807 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.439237 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: E0129 15:52:00.439777 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.439958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: 
E0129 15:52:00.439985 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440000 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: E0129 15:52:00.440037 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440061 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440831 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440871 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440917 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.442188 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.444288 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.444349 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.467330 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.562935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.562990 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563062 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " 
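The RemoveStaleState pairs above fire because the replacement pod reuses the name nova-metadata-0 under a new UID: before admitting it, the CPU and memory managers drop the per-container resource assignments recorded for the old UIDs. A rough, purely illustrative sketch of that bookkeeping (these types are mine, not the kubelet's actual data structures):

package main

import "fmt"

// assignments mimics checkpointed per-(podUID, containerName) state.
// removeStaleState drops entries for pods that no longer exist, which
// is what produces the "RemoveStaleState: removing container" /
// "Deleted CPUSet assignment" pairs in the log.
type assignments map[string]map[string]string

func (a assignments) removeStaleState(livePods map[string]bool) {
	for podUID, containers := range a {
		if livePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(a, podUID)
	}
}

func main() {
	a := assignments{"9e359ccb-1739-4978-b6d7-cc9c22ba4bad": {
		"nova-metadata-metadata": "cpuset",
		"nova-metadata-log":      "cpuset",
	}}
	a.removeStaleState(map[string]bool{}) // old pod is gone
}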
pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664989 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.665026 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.665070 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.666325 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.669084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.669675 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.670561 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.683751 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.778124 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.108395 5008 generic.go:334] "Generic (PLEG): container finished" podID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" exitCode=0 Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.108717 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerDied","Data":"a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50"} Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.262196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.338000 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" path="/var/lib/kubelet/pods/9e359ccb-1739-4978-b6d7-cc9c22ba4bad/volumes" Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.510423 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517344 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517683 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517712 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.627978 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.718591 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.718871 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" containerID="cri-o://517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" gracePeriod=10 Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.017250 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.125495 5008 generic.go:334] "Generic (PLEG): container finished" podID="35979baf-dba0-453c-bafd-16985d082448" containerID="517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" exitCode=0 Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.125583 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128101 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128186 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"f54ae340e3e9e95461e8dd7339317d96f2c608cdca914d4ca65b81b43814916d"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129696 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerDied","Data":"868f4c6e442b8edd70fd72637691064134ed05f40e47973b7eb3e61bb8292d33"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129740 5008 scope.go:117] "RemoveContainer" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129860 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.164309 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.164286466 podStartE2EDuration="2.164286466s" podCreationTimestamp="2026-01-29 15:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:02.153557376 +0000 UTC m=+1465.826411623" watchObservedRunningTime="2026-01-29 15:52:02.164286466 +0000 UTC m=+1465.837140703" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194822 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.199901 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj" (OuterVolumeSpecName: "kube-api-access-h4pfj") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "kube-api-access-h4pfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.225275 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.228405 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data" (OuterVolumeSpecName: "config-data") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297283 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297731 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297744 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.467876 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.478124 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.489661 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: E0129 15:52:02.490165 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.490197 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.490429 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.491196 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.494352 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.526485 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603702 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705353 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705520 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705592 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.711388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.712655 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.728388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr65m\" (UniqueName: 
\"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.787971 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.828031 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908860 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908922 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909004 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909022 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909070 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.924225 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq" (OuterVolumeSpecName: "kube-api-access-w4hcq") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "kube-api-access-w4hcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.959526 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.977438 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config" (OuterVolumeSpecName: "config") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.985810 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.005231 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013353 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013402 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013415 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013427 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013441 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.022015 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.115116 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142023 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"3e9db3acbe84cb18dcd650ffdeedfffc3c78951f208824646557062d45cea8c7"} Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142083 5008 scope.go:117] "RemoveContainer" containerID="517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142106 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.181542 5008 scope.go:117] "RemoveContainer" containerID="054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.182863 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:03 crc kubenswrapper[5008]: E0129 15:52:03.186014 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.191539 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.312233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:03 crc kubenswrapper[5008]: W0129 15:52:03.319657 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fb31b59_3f31_4c28_ab5c_e2248ed9fd68.slice/crio-389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db WatchSource:0}: Error finding container 389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db: Status 404 returned error can't find the container with id 389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.346211 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" path="/var/lib/kubelet/pods/1f0bf87f-118b-4ad5-8354-688ae93d75e8/volumes" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.347086 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35979baf-dba0-453c-bafd-16985d082448" path="/var/lib/kubelet/pods/35979baf-dba0-453c-bafd-16985d082448/volumes" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.215082 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerStarted","Data":"17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.215540 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerStarted","Data":"389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241071 5008 generic.go:334] "Generic (PLEG): container finished" podID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerID="6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" exitCode=0 Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241142 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241990 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.241974705 podStartE2EDuration="2.241974705s" podCreationTimestamp="2026-01-29 15:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:04.237632659 +0000 UTC m=+1467.910486916" watchObservedRunningTime="2026-01-29 15:52:04.241974705 +0000 UTC m=+1467.914828942" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.641709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742370 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742437 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742699 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.743825 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs" (OuterVolumeSpecName: "logs") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.750840 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4" (OuterVolumeSpecName: "kube-api-access-f6cf4") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "kube-api-access-f6cf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.777239 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.777871 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data" (OuterVolumeSpecName: "config-data") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845381 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845438 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845453 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845467 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.259701 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.260310 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"0fa105059117f2b4c51f1c17146bba198c1ad14ed2d53794274c62ac38095b80"} Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.260354 5008 scope.go:117] "RemoveContainer" containerID="6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.286405 5008 scope.go:117] "RemoveContainer" containerID="2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.342493 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.342533 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.357905 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358312 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358336 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358352 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358360 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358377 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358384 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358396 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="init" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358403 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="init" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358584 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358596 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358614 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.359539 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.364562 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.365463 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.455229 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456028 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456133 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456209 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558079 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558299 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558339 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.559065 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " 
pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.563991 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.564151 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.576855 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.678749 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.779077 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.779807 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:06 crc kubenswrapper[5008]: I0129 15:52:06.202759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:06 crc kubenswrapper[5008]: W0129 15:52:06.206190 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd2d95c_747e_4f68_9eca_436834c87a96.slice/crio-fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff WatchSource:0}: Error finding container fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff: Status 404 returned error can't find the container with id fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff Jan 29 15:52:06 crc kubenswrapper[5008]: I0129 15:52:06.271543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.285978 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.286335 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.312328 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.312306551 podStartE2EDuration="2.312306551s" podCreationTimestamp="2026-01-29 15:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:52:07.306652774 +0000 UTC m=+1470.979507051" watchObservedRunningTime="2026-01-29 15:52:07.312306551 +0000 UTC m=+1470.985160808" Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.342913 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" path="/var/lib/kubelet/pods/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c/volumes" Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.828742 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.451894 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.452415 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vwdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.453646 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:52:10 crc kubenswrapper[5008]: I0129 15:52:10.779989 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:52:10 crc kubenswrapper[5008]: I0129 15:52:10.780058 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:52:11 crc kubenswrapper[5008]: I0129 15:52:11.800897 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:11 crc kubenswrapper[5008]: I0129 15:52:11.800957 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.329919 5008 generic.go:334] "Generic (PLEG): container finished" podID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerID="36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8" exitCode=0 Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.329954 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerDied","Data":"36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8"} Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.828482 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.861287 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.396082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:52:13 crc kubenswrapper[5008]: E0129 15:52:13.400287 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
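The Startup probe failures above show both nova-metadata containers being polled with an HTTPS GET against 10.217.0.196:8775 until the service answers (the later "started" / "ready" transitions). A plausible reconstruction of that probe, assuming k8s.io/api v0.23+; only the scheme, address, port, and path are taken from the log, the thresholds are guesses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    5,  // assumed; the log only shows the retry outcomes
		FailureThreshold: 30, // assumed
	}
	fmt.Println(probe.HTTPGet.Scheme, probe.HTTPGet.Port.IntValue())
}

The 403 ErrImagePull entries just before this are unrelated to the probes: the registry refused a bearer token for registry.redhat.io/ubi9/httpd-24:latest, so the ceilometer-0 proxy-httpd container (whose full spec is dumped in the "Unhandled Error" entry) never started.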
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.779641 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818387 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818553 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818721 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.825013 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc" (OuterVolumeSpecName: "kube-api-access-84qhc") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "kube-api-access-84qhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.836059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts" (OuterVolumeSpecName: "scripts") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.851416 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data" (OuterVolumeSpecName: "config-data") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.853148 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920755 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920816 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920829 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920843 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.990290 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.990361 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.358761 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.358913 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerDied","Data":"028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec"} Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.360122 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.441645 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:14 crc kubenswrapper[5008]: E0129 15:52:14.442320 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.442339 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.442572 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.443511 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.446425 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.449983 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.450087 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.450192 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.457955 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550888 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550979 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.571224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.571979 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.590911 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.772891 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.244226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:15 crc kubenswrapper[5008]: W0129 15:52:15.275594 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a40e352_7353_41e6_8c6e_58b7beca8ab9.slice/crio-b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70 WatchSource:0}: Error finding container b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70: Status 404 returned error can't find the container with id b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70 Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.369438 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a40e352-7353-41e6-8c6e-58b7beca8ab9","Type":"ContainerStarted","Data":"b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70"} Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.678987 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.679050 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.384029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a40e352-7353-41e6-8c6e-58b7beca8ab9","Type":"ContainerStarted","Data":"82fdfd42d6fe42d23008b43a8882e8abe9c698de4e1ef0dac6a007e0ec6158c8"} Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.384919 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.408205 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.408186471 podStartE2EDuration="2.408186471s" podCreationTimestamp="2026-01-29 15:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:16.406234713 +0000 UTC m=+1480.079089000" watchObservedRunningTime="2026-01-29 15:52:16.408186471 +0000 UTC m=+1480.081040718" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.721195 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.764893 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded 
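The "Observed pod startup duration" line above is plain arithmetic over its own fields: with no image pull (both pull timestamps are the zero value 0001-01-01), podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A short check of that relation on the numbers from the nova-cell1-conductor-0 entry:

// slo_duration.go — reproduces podStartSLOduration=2.408186471 from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's time.String() form used in the log fields.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-29 15:52:14 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-29 15:52:16.408186471 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 2.408186471, matching the log
}

The same relation holds for the later trackers in this section (e.g. dnsmasq-dns-cd5cbd7b9-ttnd7: 15:52:29.615764107 minus 15:52:26 gives 3.615764107).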
Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.789758 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.791931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.805526 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 29 15:52:21 crc kubenswrapper[5008]: I0129 15:52:21.434265 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 29 15:52:23 crc kubenswrapper[5008]: E0129 15:52:23.327279 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"
Jan 29 15:52:23 crc kubenswrapper[5008]: E0129 15:52:23.628810 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]"
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.468807 5008 generic.go:334] "Generic (PLEG): container finished" podID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerID="85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" exitCode=137
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.468910 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerDied","Data":"85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498"}
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.627036 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.799417 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805619 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") "
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") "
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805967 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") "
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.811293 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv" (OuterVolumeSpecName: "kube-api-access-cbfkv") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "kube-api-access-cbfkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.847577 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data" (OuterVolumeSpecName: "config-data") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.849912 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908545 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908571 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908580 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480395 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerDied","Data":"89acbc3b89babecb84402f3ec55311a2ac1633dd886e5581dfb789b75a401ac3"} Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480469 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480959 5008 scope.go:117] "RemoveContainer" containerID="85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.527235 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.554959 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569324 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: E0129 15:52:25.569668 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569687 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569983 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.570729 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.572969 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.573351 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.573514 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.577669 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.695854 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.696428 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.696463 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.698847 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.723917 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724210 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724511 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbrsw\" (UniqueName: \"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826637 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qbrsw\" (UniqueName: \"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826790 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826857 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.832224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.832466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.837465 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.838145 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.852883 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbrsw\" (UniqueName: 
\"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.892072 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.441028 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:26 crc kubenswrapper[5008]: W0129 15:52:26.456993 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ca19b4_0317_4b08_8dc2_a4295c2fb8e4.slice/crio-5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58 WatchSource:0}: Error finding container 5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58: Status 404 returned error can't find the container with id 5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58 Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.493708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4","Type":"ContainerStarted","Data":"5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58"} Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.495514 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.516263 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.709952 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.718235 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.734746 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849641 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5brt\" (UniqueName: \"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849685 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849767 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849789 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951830 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951882 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5brt\" (UniqueName: \"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951901 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951919 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952016 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952694 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952753 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.953307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.970034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5brt\" (UniqueName: 
\"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.051117 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.344639 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" path="/var/lib/kubelet/pods/13fcb7f1-5a0f-427b-a4a4-709553d1c88d/volumes" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.503080 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4","Type":"ContainerStarted","Data":"67f16d1b387a0d34b2551b42771ef2767b595fae063dd42beeac6345275b6da4"} Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.527831 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.527810461 podStartE2EDuration="2.527810461s" podCreationTimestamp="2026-01-29 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:27.517486674 +0000 UTC m=+1491.190340931" watchObservedRunningTime="2026-01-29 15:52:27.527810461 +0000 UTC m=+1491.200664708" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.557496 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.550341 5008 generic.go:334] "Generic (PLEG): container finished" podID="ffdf9dd1-5826-4e41-90ba-770e9ae42cc2" containerID="9123fb9e96d8e10624659bdb5df46afbf9710486281b7894a1e9c73d7a7fa101" exitCode=0 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.552752 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerDied","Data":"9123fb9e96d8e10624659bdb5df46afbf9710486281b7894a1e9c73d7a7fa101"} Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.552785 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerStarted","Data":"8bb9aecd790e2955eab838b530d9b210e3da5bc976e325cecc92aa2c2f24aa45"} Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930374 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930629 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" containerID="cri-o://1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" gracePeriod=30 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930705 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" containerID="cri-o://b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" gracePeriod=30 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930801 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" containerID="cri-o://816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.312193 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.583882 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerStarted","Data":"9a687724e247ca718da90a31ceafd46b7a02908221bfaf3e0c033da3a7d70d68"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.585169 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591308 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" exitCode=2 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591343 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" exitCode=0 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591352 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" exitCode=0 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591389 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591441 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591551 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" containerID="cri-o://3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591649 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" containerID="cri-o://8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.615785 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" podStartSLOduration=3.615764107 podStartE2EDuration="3.615764107s" podCreationTimestamp="2026-01-29 15:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:29.614041946 +0000 UTC m=+1493.286896173" watchObservedRunningTime="2026-01-29 15:52:29.615764107 +0000 UTC m=+1493.288618364" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.615354 5008 generic.go:334] "Generic (PLEG): container finished" podID="efd2d95c-747e-4f68-9eca-436834c87a96" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" exitCode=143 Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.615465 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.761365 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.893307 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.960997 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961100 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961124 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961184 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961253 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961292 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961327 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: 
\"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961572 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961765 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.962101 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.962131 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.967037 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz" (OuterVolumeSpecName: "kube-api-access-5vwdz") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "kube-api-access-5vwdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.968662 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts" (OuterVolumeSpecName: "scripts") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.998924 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.026140 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.028640 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data" (OuterVolumeSpecName: "config-data") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.063954 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064020 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064038 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064052 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064064 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"0c880a32127e0f9cf20872f0cb9c9103c1ec0fcb4e31857d57145ee7e6ef5eff"} Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626957 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626987 5008 scope.go:117] "RemoveContainer" containerID="b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.654310 5008 scope.go:117] "RemoveContainer" containerID="816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.674066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.688996 5008 scope.go:117] "RemoveContainer" containerID="1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.712670 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721156 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721734 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721757 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721775 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721794 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721817 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721824 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722044 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722080 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722098 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.723954 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.726110 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.726633 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.762692 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.885742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.885982 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886030 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886209 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886298 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886329 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988451 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988486 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988507 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988586 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.989488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.989607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.993660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.993696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.994918 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.003676 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.006735 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.056021 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.535189 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.637971 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"c0e05b5105ed0e3757d467eff34631c34dcca13e2acddb3cd6556349dd4ddb10"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.285087 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315325 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315492 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315519 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315560 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.316160 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs" (OuterVolumeSpecName: "logs") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.347826 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn" (OuterVolumeSpecName: "kube-api-access-kmzzn") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "kube-api-access-kmzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.353129 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.356021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data" (OuterVolumeSpecName: "config-data") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.378381 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" path="/var/lib/kubelet/pods/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7/volumes" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417491 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417528 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417537 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417546 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649940 5008 generic.go:334] "Generic (PLEG): container finished" podID="efd2d95c-747e-4f68-9eca-436834c87a96" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" exitCode=0 Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649979 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.650118 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.650137 5008 scope.go:117] "RemoveContainer" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.679413 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.682803 5008 scope.go:117] "RemoveContainer" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.694310 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703056 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.703420 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703430 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.703456 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703463 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.705124 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.705158 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.706131 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.708775 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.709017 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.709152 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723531 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723617 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723686 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723763 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.759911 5008 scope.go:117] "RemoveContainer" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.761585 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": container with ID starting with 8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c not found: ID does not exist" 
containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.761626 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} err="failed to get container status \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": rpc error: code = NotFound desc = could not find container \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": container with ID starting with 8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c not found: ID does not exist" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.761657 5008 scope.go:117] "RemoveContainer" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.762075 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": container with ID starting with 3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b not found: ID does not exist" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.762099 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} err="failed to get container status \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": rpc error: code = NotFound desc = could not find container \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": container with ID starting with 3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b not found: ID does not exist" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.824916 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825169 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825193 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825264 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825315 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.835854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.837740 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.838535 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.840212 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.851469 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.912097 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.066310 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.563468 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.661970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c"} Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.663561 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.336549 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" path="/var/lib/kubelet/pods/efd2d95c-747e-4f68-9eca-436834c87a96/volumes" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.677400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.680777 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.680900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.709470 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.709433789 podStartE2EDuration="2.709433789s" podCreationTimestamp="2026-01-29 15:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:35.705054094 +0000 UTC m=+1499.377908391" watchObservedRunningTime="2026-01-29 15:52:35.709433789 +0000 UTC m=+1499.382288056" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.892931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.915675 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.703035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.934967 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.936185 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.946333 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.953154 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.962340 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.984912 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.984982 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.985055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.985103 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.053473 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086360 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086475 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086565 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " 
pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086601 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.093684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.093849 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.110127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.167221 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.210200 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.210448 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" containerID="cri-o://1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" gracePeriod=10 Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.254244 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.703238 5008 generic.go:334] "Generic (PLEG): container finished" podID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerID="1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" exitCode=0 Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.703435 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3"} Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.828387 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.504329 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.549358 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.549579 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.550810 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571813 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571976 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572073 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572106 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572185 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.590418 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2" (OuterVolumeSpecName: "kube-api-access-c2ns2") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "kube-api-access-c2ns2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.645525 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config" (OuterVolumeSpecName: "config") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.652277 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.658287 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.663003 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.672336 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673731 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673772 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673802 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673815 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673825 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673844 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.711911 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"30bedbc0bc93f8ca5f3511d1081097f8182d9fc6d6457e41dfa4a6a23655328a"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.711967 5008 scope.go:117] "RemoveContainer" containerID="1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 
15:52:38.712105 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.718976 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerStarted","Data":"0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.719006 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerStarted","Data":"60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.722665 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571"} Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.724998 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.736549 5008 scope.go:117] "RemoveContainer" containerID="60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.756375 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k4msd" podStartSLOduration=2.756358693 podStartE2EDuration="2.756358693s" podCreationTimestamp="2026-01-29 15:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:38.736096979 +0000 UTC m=+1502.408951226" watchObservedRunningTime="2026-01-29 15:52:38.756358693 +0000 UTC m=+1502.429212930" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.767539 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.777228 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:39 crc kubenswrapper[5008]: I0129 15:52:39.334273 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" path="/var/lib/kubelet/pods/65ae154d-9b35-408c-bcdb-8b9601be71c8/volumes" Jan 29 15:52:39 crc kubenswrapper[5008]: E0129 15:52:39.733341 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.791029 5008 generic.go:334] "Generic (PLEG): container finished" podID="dfacde84-7d28-464b-8854-622fd127956c" containerID="0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd" exitCode=0 Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.791078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" 
event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerDied","Data":"0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd"} Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990472 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990552 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990618 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.991688 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.991815 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" gracePeriod=600 Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.066923 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.067000 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815203 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" exitCode=0 Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815426 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815588 5008 scope.go:117] "RemoveContainer" containerID="afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.094923 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.095277 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.241439 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295643 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295815 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295891 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.311963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j" (OuterVolumeSpecName: "kube-api-access-4kn6j") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "kube-api-access-4kn6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.316283 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts" (OuterVolumeSpecName: "scripts") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.349582 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.353416 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data" (OuterVolumeSpecName: "config-data") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.398873 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399062 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399168 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399267 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerDied","Data":"60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09"} Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826811 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826820 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09" Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.025688 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.026327 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" containerID="cri-o://17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.045066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.045385 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" containerID="cri-o://bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.046377 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" containerID="cri-o://f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.194913 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.195203 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" containerID="cri-o://b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.195248 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" containerID="cri-o://951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.844175 5008 generic.go:334] "Generic (PLEG): container finished" podID="038b9a46-5128-497b-8073-557e8f3542fb" containerID="b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" exitCode=143 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.844247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e"} Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.846720 5008 generic.go:334] "Generic (PLEG): container finished" podID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerID="bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" exitCode=143 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.846749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119"} Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.829459 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.830534 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.831062 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.831123 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:48 crc kubenswrapper[5008]: I0129 15:52:48.910238 5008 generic.go:334] "Generic (PLEG): container finished" podID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" exitCode=0 Jan 29 15:52:48 crc kubenswrapper[5008]: I0129 15:52:48.910314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerDied","Data":"17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.558816 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.584338 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.584422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.614897 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m" (OuterVolumeSpecName: "kube-api-access-fr65m") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "kube-api-access-fr65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.621385 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.685809 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.686084 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.686116 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.712347 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data" (OuterVolumeSpecName: "config-data") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "config-data". 
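
An aside on the "ExecSync cmd from runtime service failed" records above: the readiness probe races container teardown, and the probe command is preserved verbatim in the error (cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]). Below is a minimal Go reconstruction of what such a probe spec plausibly looks like; the command is copied from the log, while the timing fields are illustrative guesses, since the real manifest is not part of this stream.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Command copied verbatim from the ExecSync error above; the timing
	// values are assumptions for illustration only.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
			},
		},
		PeriodSeconds:  5, // assumed
		TimeoutSeconds: 5, // assumed
	}
	fmt.Println("readiness exec:", probe.Exec.Command)
}
```

Once the container's process is gone, any exec probe of this shape returns the NotFound error seen above until the kubelet finishes marking the container dead.
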
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.787986 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.921152 5008 generic.go:334] "Generic (PLEG): container finished" podID="038b9a46-5128-497b-8073-557e8f3542fb" containerID="951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" exitCode=0 Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.921234 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924299 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerDied","Data":"389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924459 5008 scope.go:117] "RemoveContainer" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924346 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.960240 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.968577 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993104 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993483 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993501 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993510 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="init" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993516 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="init" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993534 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993541 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993565 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993570 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 
15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993754 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993776 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993789 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.994358 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.995772 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.002983 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.093889 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.094261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.094294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195762 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.200823 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.201015 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.212132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.255645 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297113 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297152 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297184 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297233 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs" (OuterVolumeSpecName: "logs") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.300023 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq" (OuterVolumeSpecName: "kube-api-access-l5fvq") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "kube-api-access-l5fvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.319486 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.326764 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data" (OuterVolumeSpecName: "config-data") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.337501 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398890 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398925 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398935 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398944 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.412827 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.493960 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.494160 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.495654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.501214 5008 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: W0129 15:52:50.829488 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6caa062_78b8_42ad_a655_6828f63a7e8f.slice/crio-8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee WatchSource:0}: Error finding container 8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee: Status 404 returned error can't find the container with id 8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.831627 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.943712 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6caa062-78b8-42ad-a655-6828f63a7e8f","Type":"ContainerStarted","Data":"8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"f54ae340e3e9e95461e8dd7339317d96f2c608cdca914d4ca65b81b43814916d"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948376 5008 scope.go:117] "RemoveContainer" containerID="951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948716 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961023 5008 generic.go:334] "Generic (PLEG): container finished" podID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerID="f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" exitCode=0 Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961112 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961128 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.970654 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.982109 5008 scope.go:117] "RemoveContainer" containerID="b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.018143 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.027727 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052326 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052803 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052841 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052866 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052875 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052890 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052898 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052919 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052928 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053142 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053165 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053183 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053198 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.054363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.057016 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.058436 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.060958 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112818 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112906 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112946 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112989 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113032 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113123 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113384 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs" (OuterVolumeSpecName: "logs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "logs". 
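
The "SyncLoop DELETE" records here, together with the earlier "Killing container with a grace period ... gracePeriod=30" entries, correspond to an API-side pod delete carrying a 30-second grace period. A minimal client-go sketch of that call follows; the namespace and pod name are taken from the log, and in-cluster credentials are assumed (outside a pod you would build the config with clientcmd instead).

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes we run inside the cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A 30s grace period produces exactly the "Killing container with a
	// grace period ... gracePeriod=30" records seen in this stream.
	grace := int64(30)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(), "nova-api-0",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete issued:", err)
}
```
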
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113691 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.116751 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh" (OuterVolumeSpecName: "kube-api-access-ns2nh") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "kube-api-access-ns2nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.139531 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data" (OuterVolumeSpecName: "config-data") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.145607 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.159128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.171051 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215210 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215400 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215424 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215467 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215508 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215520 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215530 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215541 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215549 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317428 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317483 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317528 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.318369 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.321630 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.322414 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.322436 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.335161 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="038b9a46-5128-497b-8073-557e8f3542fb" path="/var/lib/kubelet/pods/038b9a46-5128-497b-8073-557e8f3542fb/volumes" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.335725 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" 
path="/var/lib/kubelet/pods/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68/volumes" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.349679 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.377352 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.863139 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: W0129 15:52:51.873081 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4470533_b658_46fe_8749_f371b22703b2.slice/crio-db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063 WatchSource:0}: Error finding container db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063: Status 404 returned error can't find the container with id db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063 Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.972124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6caa062-78b8-42ad-a655-6828f63a7e8f","Type":"ContainerStarted","Data":"12209b0fe0deeb6852eef600b29008eb94a8ee68d3ebcc1d302584865f889359"} Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.977676 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063"} Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.979177 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.996560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.996534278 podStartE2EDuration="2.996534278s" podCreationTimestamp="2026-01-29 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:51.993001553 +0000 UTC m=+1515.665855800" watchObservedRunningTime="2026-01-29 15:52:51.996534278 +0000 UTC m=+1515.669388515" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.021843 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.031966 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.046057 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.047862 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.050717 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.052078 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.053286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.056191 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234403 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234721 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234828 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234894 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336609 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") 
pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336707 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336739 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336784 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336803 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.337748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.341675 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.343330 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.343614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.344120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.354533 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 
29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.374185 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.825469 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: W0129 15:52:52.841857 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffff5fc1_f4be_4fad_bfa8_890ea58d2a00.slice/crio-6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430 WatchSource:0}: Error finding container 6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430: Status 404 returned error can't find the container with id 6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430 Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.987875 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430"} Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.994290 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"e3f2b0eb5709a441e3aaf944c5a5bd7f9e69fcf4f51df6104efd6bfbf194d4e5"} Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.994342 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"c202e9718bd598f0b9777b933a454710906e6dc6c784b287c488720233bc854a"} Jan 29 15:52:53 crc kubenswrapper[5008]: I0129 15:52:53.024880 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.024844895 podStartE2EDuration="3.024844895s" podCreationTimestamp="2026-01-29 15:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:53.013509554 +0000 UTC m=+1516.686363831" watchObservedRunningTime="2026-01-29 15:52:53.024844895 +0000 UTC m=+1516.697699132" Jan 29 15:52:53 crc kubenswrapper[5008]: I0129 15:52:53.336349 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" path="/var/lib/kubelet/pods/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6/volumes" Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.012313 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"328db75da770e9e18ad52014a71d74596e17fa2e4fa8662790336cfc18e63783"} Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.012365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"af9398ca33cdf40db434083cb15e2dbcc32be3c6c714d1798ebce279aef34ce5"} Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.048762 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.048744007 podStartE2EDuration="2.048744007s" podCreationTimestamp="2026-01-29 15:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:52:54.042181641 +0000 UTC m=+1517.715035898" watchObservedRunningTime="2026-01-29 15:52:54.048744007 +0000 UTC m=+1517.721598244" Jan 29 15:52:55 crc kubenswrapper[5008]: I0129 15:52:55.321052 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:52:56 crc kubenswrapper[5008]: I0129 15:52:56.378295 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:56 crc kubenswrapper[5008]: I0129 15:52:56.378765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:53:00 crc kubenswrapper[5008]: I0129 15:53:00.321207 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:53:00 crc kubenswrapper[5008]: I0129 15:53:00.349152 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.130492 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.378383 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.378468 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:53:02 crc kubenswrapper[5008]: E0129 15:53:02.328243 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.375041 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.375089 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.396956 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a4470533-b658-46fe-8749-f371b22703b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.396991 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a4470533-b658-46fe-8749-f371b22703b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:03 crc kubenswrapper[5008]: I0129 15:53:03.457998 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffff5fc1-f4be-4fad-bfa8-890ea58d2a00" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:03 crc kubenswrapper[5008]: I0129 15:53:03.458055 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffff5fc1-f4be-4fad-bfa8-890ea58d2a00" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.385710 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.386615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.395101 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.397546 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.384558 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385118 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385567 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385640 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.397881 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.399924 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.460097 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.460941 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.462154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:24 crc kubenswrapper[5008]: I0129 15:53:24.169494 5008 scope.go:117] "RemoveContainer" containerID="1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766" Jan 29 15:53:24 crc kubenswrapper[5008]: I0129 15:53:24.210816 5008 scope.go:117] "RemoveContainer" 
containerID="e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d" Jan 29 15:53:29 crc kubenswrapper[5008]: E0129 15:53:29.327700 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:43 crc kubenswrapper[5008]: E0129 15:53:43.330546 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.580611 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.581167 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.582392 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:08 crc kubenswrapper[5008]: E0129 15:54:08.327402 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:23 crc kubenswrapper[5008]: E0129 15:54:23.326711 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:24 crc kubenswrapper[5008]: I0129 15:54:24.432620 5008 scope.go:117] "RemoveContainer" containerID="82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91" Jan 29 15:54:38 crc kubenswrapper[5008]: E0129 15:54:38.326990 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:49 crc kubenswrapper[5008]: E0129 15:54:49.325532 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:01 crc kubenswrapper[5008]: E0129 15:55:01.328123 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:13 crc kubenswrapper[5008]: I0129 15:55:13.991339 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:55:13 crc kubenswrapper[5008]: I0129 15:55:13.995040 5008 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:55:15 crc kubenswrapper[5008]: E0129 15:55:15.327727 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:30 crc kubenswrapper[5008]: I0129 15:55:30.326887 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.461769 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.462003 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.463220 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:42 crc kubenswrapper[5008]: E0129 15:55:42.327394 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:43 crc kubenswrapper[5008]: I0129 15:55:43.990271 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:55:43 crc kubenswrapper[5008]: I0129 15:55:43.990675 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:55:57 crc kubenswrapper[5008]: E0129 15:55:57.330674 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:10 crc kubenswrapper[5008]: E0129 15:56:10.327043 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.990896 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.991275 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.991333 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.992347 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.992448 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" gracePeriod=600 Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243911 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" exitCode=0 Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243955 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243988 5008 scope.go:117] "RemoveContainer" containerID="65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" Jan 29 15:56:14 crc kubenswrapper[5008]: E0129 15:56:14.291850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:15 crc kubenswrapper[5008]: I0129 15:56:15.256960 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:15 crc kubenswrapper[5008]: E0129 15:56:15.257595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:23 crc kubenswrapper[5008]: E0129 15:56:23.326971 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:30 crc kubenswrapper[5008]: I0129 15:56:30.324177 5008 scope.go:117] "RemoveContainer" 
containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:30 crc kubenswrapper[5008]: E0129 15:56:30.325139 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:36 crc kubenswrapper[5008]: E0129 15:56:36.327723 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.914359 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.916768 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.937830 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961116 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961440 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063443 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063684 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.064053 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.064070 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.090579 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.251725 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.323616 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:42 crc kubenswrapper[5008]: E0129 15:56:42.324146 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.811820 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.507623 5008 generic.go:334] "Generic (PLEG): container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770" exitCode=0 Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.508172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770"} Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.508202 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerStarted","Data":"e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8"} Jan 29 15:56:46 crc kubenswrapper[5008]: I0129 15:56:46.537115 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20" exitCode=0 Jan 29 15:56:46 crc kubenswrapper[5008]: I0129 15:56:46.537145 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20"} Jan 29 15:56:47 crc kubenswrapper[5008]: E0129 15:56:47.335481 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:48 crc kubenswrapper[5008]: I0129 15:56:48.557247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerStarted","Data":"fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4"} Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.034256 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lc24f" podStartSLOduration=6.081057289 podStartE2EDuration="10.034231822s" podCreationTimestamp="2026-01-29 15:56:41 +0000 UTC" firstStartedPulling="2026-01-29 15:56:43.510142204 +0000 UTC m=+1747.182996441" lastFinishedPulling="2026-01-29 15:56:47.463316737 +0000 UTC m=+1751.136170974" observedRunningTime="2026-01-29 15:56:48.577026083 +0000 UTC m=+1752.249880310" watchObservedRunningTime="2026-01-29 15:56:51.034231822 +0000 UTC m=+1754.707086089" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.051042 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.065615 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.073674 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.080974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.089841 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.097302 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.105602 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.112741 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.119178 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.125610 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.334916 5008 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" path="/var/lib/kubelet/pods/08da0630-8fe2-4a33-be0c-d81bba67c32c/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.335566 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" path="/var/lib/kubelet/pods/232739d0-09f9-4843-8c9f-fc19bc53763f/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.336096 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" path="/var/lib/kubelet/pods/30bc21a6-d1eb-4200-add0-523a33ffb2ff/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.336637 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" path="/var/lib/kubelet/pods/328d3758-78bd-4a08-b91f-f2f4c9b8b645/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.337976 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd141cd-e623-4692-892c-cf683275d378" path="/var/lib/kubelet/pods/6fd141cd-e623-4692-892c-cf683275d378/volumes" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.037541 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.048053 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.252242 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.252302 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.296629 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.646146 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.706773 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:53 crc kubenswrapper[5008]: I0129 15:56:53.333186 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" path="/var/lib/kubelet/pods/207579aa-feff-4069-8fcb-02c5b9cd107f/volumes" Jan 29 15:56:54 crc kubenswrapper[5008]: I0129 15:56:54.324445 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:54 crc kubenswrapper[5008]: E0129 15:56:54.324766 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:54 crc kubenswrapper[5008]: I0129 15:56:54.610528 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lc24f" 
podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server" containerID="cri-o://fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" gracePeriod=2 Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.622909 5008 generic.go:334] "Generic (PLEG): container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" exitCode=0 Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.623019 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4"} Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.624418 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8"} Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.624506 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.655227 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671350 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671480 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671708 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.673082 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities" (OuterVolumeSpecName: "utilities") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.685009 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j" (OuterVolumeSpecName: "kube-api-access-vnd9j") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "kube-api-access-vnd9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.733496 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774536 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774595 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774609 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.632136 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.666767 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.675948 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:57 crc kubenswrapper[5008]: I0129 15:56:57.342354 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91204902-80fb-472a-b67c-1d290bd97368" path="/var/lib/kubelet/pods/91204902-80fb-472a-b67c-1d290bd97368/volumes" Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.068884 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.091873 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.335156 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" path="/var/lib/kubelet/pods/98c93f6a-d803-4df3-8b35-191cbe683adf/volumes" Jan 29 15:57:01 crc kubenswrapper[5008]: E0129 15:57:01.326184 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:09 crc kubenswrapper[5008]: I0129 15:57:09.323181 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:09 crc kubenswrapper[5008]: E0129 15:57:09.323852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.326032 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.619013 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620007 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-content" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620033 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-content" Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620054 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-utilities" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620064 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-utilities" Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620093 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620101 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620314 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.622050 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.633625 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731487 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731649 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833454 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.834215 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.834272 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.860281 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.939439 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.118337 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:16 crc kubenswrapper[5008]: W0129 15:57:16.122801 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584c809e_d445_45a3_84dc_aebb0ab47f1d.slice/crio-d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c WatchSource:0}: Error finding container d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c: Status 404 returned error can't find the container with id d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810351 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248" exitCode=0 Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248"} Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810660 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerStarted","Data":"d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c"} Jan 29 15:57:18 crc kubenswrapper[5008]: I0129 15:57:18.830057 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa" exitCode=0 Jan 29 15:57:18 crc kubenswrapper[5008]: I0129 15:57:18.830111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa"} Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.014710 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.017316 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.032854 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.224042 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.224161 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.243024 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.341074 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.839033 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: W0129 15:57:19.841981 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b365eb1_533a_4b4a_92ed_da844f0144ee.slice/crio-1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b WatchSource:0}: Error finding container 1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b: Status 404 returned error can't find the container with id 1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.853616 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880" exitCode=0 Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.853702 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.854064 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerStarted","Data":"1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.861660 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerStarted","Data":"560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.906570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fsf7n" podStartSLOduration=3.192063329 podStartE2EDuration="6.906550728s" podCreationTimestamp="2026-01-29 15:57:14 +0000 UTC" firstStartedPulling="2026-01-29 15:57:16.814960336 +0000 UTC m=+1780.487814573" lastFinishedPulling="2026-01-29 15:57:20.529447715 +0000 UTC m=+1784.202301972" observedRunningTime="2026-01-29 15:57:20.893158215 +0000 UTC m=+1784.566012482" watchObservedRunningTime="2026-01-29 15:57:20.906550728 +0000 UTC m=+1784.579404955" Jan 29 15:57:21 crc kubenswrapper[5008]: I0129 15:57:21.324580 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:21 crc kubenswrapper[5008]: E0129 15:57:21.324861 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 15:57:21 crc kubenswrapper[5008]: E0129 15:57:21.324861 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 15:57:22 crc kubenswrapper[5008]: I0129 15:57:22.884212 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44" exitCode=0
Jan 29 15:57:22 crc kubenswrapper[5008]: I0129 15:57:22.884380 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44"}
Jan 29 15:57:23 crc kubenswrapper[5008]: I0129 15:57:23.896374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerStarted","Data":"b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c"}
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.542289 5008 scope.go:117] "RemoveContainer" containerID="08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.575827 5008 scope.go:117] "RemoveContainer" containerID="d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.644035 5008 scope.go:117] "RemoveContainer" containerID="a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.662175 5008 scope.go:117] "RemoveContainer" containerID="d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.710142 5008 scope.go:117] "RemoveContainer" containerID="88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.758363 5008 scope.go:117] "RemoveContainer" containerID="c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.812921 5008 scope.go:117] "RemoveContainer" containerID="9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.939742 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hcbjg" podStartSLOduration=4.213876191 podStartE2EDuration="6.939721764s" podCreationTimestamp="2026-01-29 15:57:18 +0000 UTC" firstStartedPulling="2026-01-29 15:57:20.856741719 +0000 UTC m=+1784.529595956" lastFinishedPulling="2026-01-29 15:57:23.582587282 +0000 UTC m=+1787.255441529" observedRunningTime="2026-01-29 15:57:24.932681304 +0000 UTC m=+1788.605535541" watchObservedRunningTime="2026-01-29 15:57:24.939721764 +0000 UTC m=+1788.612576021"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.939807 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fsf7n"
Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.940468 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fsf7n"
Jan 29 15:57:25 crc kubenswrapper[5008]: I0129 15:57:25.000864 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fsf7n"
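The machine-config-daemon pair above ("RemoveContainer" followed by "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s") repeats through the rest of this log every ten seconds or so: the sync loop keeps re-logging the back-off, while the container itself is only eligible for a restart once the 5m0s window expires. The 5m0s is kubelet's restart back-off cap; to the best of my knowledge the delay starts at 10s and doubles on each crash until it reaches the cap, so the constants in this sketch are assumed defaults rather than values read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial delay, doubling per crash, 5m cap.
	initial := 10 * time.Second
	maxDelay := 5 * time.Minute

	delay := initial
	for crash := 1; crash <= 8; crash++ {
		fmt.Printf("crash %d: next restart in %v\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // from the 6th crash on, every message reads "back-off 5m0s"
		}
	}
}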
pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:25 crc kubenswrapper[5008]: E0129 15:57:25.326654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:25 crc kubenswrapper[5008]: I0129 15:57:25.989648 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:27 crc kubenswrapper[5008]: I0129 15:57:27.191214 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:27 crc kubenswrapper[5008]: I0129 15:57:27.942742 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fsf7n" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" containerID="cri-o://560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" gracePeriod=2 Jan 29 15:57:28 crc kubenswrapper[5008]: I0129 15:57:28.958166 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" exitCode=0 Jan 29 15:57:28 crc kubenswrapper[5008]: I0129 15:57:28.958264 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82"} Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.079139 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.103312 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.111400 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.119806 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.129692 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.136395 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.146380 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.156677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.167860 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.172556 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.180938 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.190956 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.333539 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0494524d-f73e-4534-9064-b578d41bea87" path="/var/lib/kubelet/pods/0494524d-f73e-4534-9064-b578d41bea87/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.334145 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bf973b-f73a-425e-9923-09caa2622a41" path="/var/lib/kubelet/pods/36bf973b-f73a-425e-9923-09caa2622a41/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.334652 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" path="/var/lib/kubelet/pods/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.335170 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" path="/var/lib/kubelet/pods/75706daa-3e40-4bbe-bb1b-44120719d48d/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.336184 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" path="/var/lib/kubelet/pods/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.336673 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" path="/var/lib/kubelet/pods/bbc0f9ba-13f2-4092-b3e4-a5744ae24174/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.342144 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.342185 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.384812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.970237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c"} Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.970661 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.016284 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.049091 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120569 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120678 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120916 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.121803 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities" (OuterVolumeSpecName: "utilities") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.127198 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5" (OuterVolumeSpecName: "kube-api-access-s47h5") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "kube-api-access-s47h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.149510 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.223647 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.224031 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.224123 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.985945 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.991907 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.041847 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.050406 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:31 crc kubenswrapper[5008]: E0129 15:57:31.205866 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584c809e_d445_45a3_84dc_aebb0ab47f1d.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.336752 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" path="/var/lib/kubelet/pods/584c809e-d445-45a3-84dc-aebb0ab47f1d/volumes" Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.996287 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hcbjg" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" containerID="cri-o://b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" gracePeriod=2 Jan 29 15:57:32 crc kubenswrapper[5008]: I0129 15:57:32.324171 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:32 crc kubenswrapper[5008]: E0129 15:57:32.324770 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007895 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" exitCode=0 Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c"} Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007956 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b"} Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007965 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.045044 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177373 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.179255 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities" (OuterVolumeSpecName: "utilities") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.186007 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf" (OuterVolumeSpecName: "kube-api-access-x9zlf") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "kube-api-access-x9zlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.279524 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.279560 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:34 crc kubenswrapper[5008]: I0129 15:57:34.016986 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.087849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.117210 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.260754 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.267801 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.335363 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" path="/var/lib/kubelet/pods/2b365eb1-533a-4b4a-92ed-da844f0144ee/volumes" Jan 29 15:57:37 crc kubenswrapper[5008]: E0129 15:57:37.331052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:44 crc kubenswrapper[5008]: I0129 15:57:44.323724 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:44 crc kubenswrapper[5008]: E0129 15:57:44.324523 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:51 crc kubenswrapper[5008]: E0129 15:57:51.329256 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:55 crc kubenswrapper[5008]: I0129 15:57:55.323897 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:55 crc kubenswrapper[5008]: E0129 15:57:55.324811 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:03 crc kubenswrapper[5008]: E0129 15:58:03.326423 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:06 crc kubenswrapper[5008]: I0129 15:58:06.324488 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:06 crc kubenswrapper[5008]: E0129 15:58:06.325348 5008 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.061682 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.072056 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.335769 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" path="/var/lib/kubelet/pods/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183/volumes" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.454477 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.455359 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.456613 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:21 crc kubenswrapper[5008]: I0129 15:58:21.323914 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:21 crc kubenswrapper[5008]: E0129 15:58:21.324680 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.082891 5008 scope.go:117] "RemoveContainer" containerID="eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.122445 5008 scope.go:117] "RemoveContainer" containerID="f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.159151 5008 scope.go:117] "RemoveContainer" containerID="6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.210735 5008 scope.go:117] "RemoveContainer" containerID="e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.243678 5008 scope.go:117] "RemoveContainer" containerID="ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.317181 5008 scope.go:117] "RemoveContainer" containerID="6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.356317 5008 scope.go:117] "RemoveContainer" containerID="64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893" Jan 29 15:58:27 crc kubenswrapper[5008]: E0129 15:58:27.333952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:34 crc kubenswrapper[5008]: I0129 15:58:34.323901 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:34 crc kubenswrapper[5008]: E0129 15:58:34.324546 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:42 crc kubenswrapper[5008]: E0129 15:58:42.327049 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:46 crc kubenswrapper[5008]: I0129 15:58:46.323760 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:46 crc kubenswrapper[5008]: E0129 15:58:46.324452 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:54 crc kubenswrapper[5008]: E0129 15:58:54.326225 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:58 crc kubenswrapper[5008]: I0129 15:58:58.323601 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:58 crc kubenswrapper[5008]: E0129 15:58:58.324052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:09 crc kubenswrapper[5008]: E0129 15:59:09.327868 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:12 crc kubenswrapper[5008]: I0129 15:59:12.323899 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:12 crc kubenswrapper[5008]: E0129 15:59:12.324866 5008 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:24 crc kubenswrapper[5008]: E0129 15:59:24.325943 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:25 crc kubenswrapper[5008]: I0129 15:59:25.526419 5008 scope.go:117] "RemoveContainer" containerID="bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" Jan 29 15:59:25 crc kubenswrapper[5008]: I0129 15:59:25.547630 5008 scope.go:117] "RemoveContainer" containerID="f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" Jan 29 15:59:26 crc kubenswrapper[5008]: I0129 15:59:26.324411 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:26 crc kubenswrapper[5008]: E0129 15:59:26.324988 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.052378 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.061928 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.071210 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.078962 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.338028 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" path="/var/lib/kubelet/pods/39abc131-ba3e-4cd8-916a-520789627dd5/volumes" Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.338934 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" path="/var/lib/kubelet/pods/c3a233d5-bf7f-4906-881c-5e81ea64e0e8/volumes" Jan 29 15:59:37 crc kubenswrapper[5008]: I0129 15:59:37.329427 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:37 crc kubenswrapper[5008]: E0129 15:59:37.330020 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" 
podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:38 crc kubenswrapper[5008]: E0129 15:59:38.327464 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:42 crc kubenswrapper[5008]: I0129 15:59:42.032035 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:59:42 crc kubenswrapper[5008]: I0129 15:59:42.043357 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:59:43 crc kubenswrapper[5008]: I0129 15:59:43.337610 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" path="/var/lib/kubelet/pods/8277eb2b-44f8-4fd9-af92-1832e0272e0e/volumes" Jan 29 15:59:48 crc kubenswrapper[5008]: I0129 15:59:48.325484 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:48 crc kubenswrapper[5008]: E0129 15:59:48.326963 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.034433 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.044861 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:59:51 crc kubenswrapper[5008]: E0129 15:59:51.327723 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.342497 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" path="/var/lib/kubelet/pods/9069f34b-ed91-4ced-8b05-91b83dd02938/volumes" Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.030760 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.043177 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.343211 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" path="/var/lib/kubelet/pods/6c2a1a18-16ff-4419-b233-8649579edbea/volumes" Jan 29 15:59:59 crc kubenswrapper[5008]: I0129 15:59:59.325215 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:59 crc kubenswrapper[5008]: E0129 15:59:59.326431 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.146685 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147478 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147512 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147550 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147562 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147583 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147592 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147607 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147615 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147640 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147646 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147655 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147661 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147918 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147935 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.148909 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.152279 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.152496 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.157099 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278757 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278852 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278874 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380459 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380538 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380566 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.381601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod 
\"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.388439 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.396600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.471031 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.894248 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.494708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerStarted","Data":"0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35"} Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.494766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerStarted","Data":"fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39"} Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.518187 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" podStartSLOduration=1.518165251 podStartE2EDuration="1.518165251s" podCreationTimestamp="2026-01-29 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:01.509149144 +0000 UTC m=+1945.182003401" watchObservedRunningTime="2026-01-29 16:00:01.518165251 +0000 UTC m=+1945.191019478" Jan 29 16:00:02 crc kubenswrapper[5008]: I0129 16:00:02.507548 5008 generic.go:334] "Generic (PLEG): container finished" podID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerID="0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35" exitCode=0 Jan 29 16:00:02 crc kubenswrapper[5008]: I0129 16:00:02.507625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerDied","Data":"0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35"} Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.835318 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945369 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945565 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945642 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.946337 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.953009 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.953185 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t" (OuterVolumeSpecName: "kube-api-access-dsx9t") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "kube-api-access-dsx9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047648 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047916 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047925 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: E0129 16:00:04.325367 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerDied","Data":"fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39"} Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529757 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529830 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:08 crc kubenswrapper[5008]: I0129 16:00:08.048654 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 16:00:08 crc kubenswrapper[5008]: I0129 16:00:08.058808 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 16:00:09 crc kubenswrapper[5008]: I0129 16:00:09.430128 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" path="/var/lib/kubelet/pods/4ec0e696-652d-463e-b97e-dad0065a543b/volumes" Jan 29 16:00:14 crc kubenswrapper[5008]: I0129 16:00:14.323950 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:14 crc kubenswrapper[5008]: E0129 16:00:14.324489 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:19 crc kubenswrapper[5008]: E0129 16:00:19.328019 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.323562 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:25 crc kubenswrapper[5008]: E0129 16:00:25.324583 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.609305 5008 scope.go:117] "RemoveContainer" containerID="d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.654677 5008 scope.go:117] "RemoveContainer" containerID="0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.708638 5008 scope.go:117] "RemoveContainer" containerID="bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.755913 5008 scope.go:117] "RemoveContainer" containerID="4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.789692 5008 scope.go:117] "RemoveContainer" containerID="9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.837450 5008 scope.go:117] "RemoveContainer" containerID="ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb" Jan 29 16:00:30 crc kubenswrapper[5008]: E0129 16:00:30.326536 5008 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266221 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-ztdsl_3c5e8be2-fe94-488c-801e-d1a56700bfa5/cluster-samples-operator/0.log" Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266497 5008 generic.go:334] "Generic (PLEG): container finished" podID="3c5e8be2-fe94-488c-801e-d1a56700bfa5" containerID="100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882" exitCode=2 Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerDied","Data":"100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882"} Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.267156 5008 scope.go:117] "RemoveContainer" containerID="100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882" Jan 29 16:00:32 crc kubenswrapper[5008]: I0129 16:00:32.279874 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-ztdsl_3c5e8be2-fe94-488c-801e-d1a56700bfa5/cluster-samples-operator/0.log" Jan 29 16:00:32 crc kubenswrapper[5008]: I0129 16:00:32.280212 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"d4cdaff99bba5504668a15c6176a1c591e22146a445a284bad1d8535fe560b21"} Jan 29 16:00:39 crc kubenswrapper[5008]: I0129 16:00:39.324826 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:39 crc kubenswrapper[5008]: E0129 16:00:39.326142 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:42 crc kubenswrapper[5008]: E0129 16:00:42.327151 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.057362 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.066869 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.076645 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.107452 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.027823 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.037943 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.047858 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.057839 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.334348 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110f96e6-c230-44f3-9247-90283da8976c" path="/var/lib/kubelet/pods/110f96e6-c230-44f3-9247-90283da8976c/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.335289 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" path="/var/lib/kubelet/pods/7f34f608-b2f8-452e-8f0d-ef600929c36e/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.335908 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" path="/var/lib/kubelet/pods/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.336515 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" path="/var/lib/kubelet/pods/d6a58042-fefd-43b8-b186-905dcfc7b1af/volumes" Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.037075 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.048679 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.058031 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.066649 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 16:00:47 crc kubenswrapper[5008]: I0129 16:00:47.341955 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" path="/var/lib/kubelet/pods/63f2899c-3ee5-4d2c-ae4f-487783fede07/volumes" Jan 29 16:00:47 crc kubenswrapper[5008]: I0129 16:00:47.343264 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" path="/var/lib/kubelet/pods/804a6c8c-4d3d-4949-adad-bf28d059ac39/volumes" Jan 29 16:00:54 crc kubenswrapper[5008]: I0129 16:00:54.324182 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:54 crc kubenswrapper[5008]: E0129 16:00:54.325061 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:56 crc kubenswrapper[5008]: E0129 16:00:56.326966 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.175532 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: E0129 16:01:00.176737 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.176761 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.177117 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.178101 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.192158 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.248550 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.248841 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.249243 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.249447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.351701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " 
pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.351772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.352014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.352063 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.357701 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.358048 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.358226 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.369445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.508128 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.949426 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: W0129 16:01:00.953426 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2cbc69_268a_4c30_b9c0_d1352f380259.slice/crio-9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5 WatchSource:0}: Error finding container 9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5: Status 404 returned error can't find the container with id 9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5 Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.530485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerStarted","Data":"7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774"} Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.532028 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerStarted","Data":"9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5"} Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.556242 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495041-5xjnv" podStartSLOduration=1.556213252 podStartE2EDuration="1.556213252s" podCreationTimestamp="2026-01-29 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:01.551704764 +0000 UTC m=+2005.224559051" watchObservedRunningTime="2026-01-29 16:01:01.556213252 +0000 UTC m=+2005.229067529" Jan 29 16:01:03 crc kubenswrapper[5008]: I0129 16:01:03.547953 5008 generic.go:334] "Generic (PLEG): container finished" podID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerID="7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774" exitCode=0 Jan 29 16:01:03 crc kubenswrapper[5008]: I0129 16:01:03.548071 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerDied","Data":"7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774"} Jan 29 16:01:04 crc kubenswrapper[5008]: I0129 16:01:04.931517 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049378 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049736 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049805 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049896 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.056929 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.057284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw" (OuterVolumeSpecName: "kube-api-access-q69lw") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "kube-api-access-q69lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.080456 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.101044 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data" (OuterVolumeSpecName: "config-data") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152514 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152561 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152576 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152589 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571040 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerDied","Data":"9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5"} Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571086 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571157 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:08 crc kubenswrapper[5008]: E0129 16:01:08.325855 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:09 crc kubenswrapper[5008]: I0129 16:01:09.324188 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:01:09 crc kubenswrapper[5008]: E0129 16:01:09.325246 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.914649 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:16 crc kubenswrapper[5008]: E0129 16:01:16.915540 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.915555 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.915745 5008 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.917272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.927043 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.999542 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.000032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.000148 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.102946 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103199 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.104583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " 
pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.127972 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.256065 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.725335 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685491 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" exitCode=0 Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685603 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac"} Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685935 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerStarted","Data":"e18ece1d64640eef6799f2182daa611c9cd47488c0aef34b85d423cbc390275e"} Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.687501 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.324032 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.709894 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.712562 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" exitCode=0 Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.712592 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8"} Jan 29 16:01:23 crc kubenswrapper[5008]: E0129 16:01:23.328550 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:25 crc kubenswrapper[5008]: I0129 16:01:25.757090 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" 
event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerStarted","Data":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} Jan 29 16:01:25 crc kubenswrapper[5008]: I0129 16:01:25.802690 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cmr5" podStartSLOduration=3.086401942 podStartE2EDuration="9.802667573s" podCreationTimestamp="2026-01-29 16:01:16 +0000 UTC" firstStartedPulling="2026-01-29 16:01:18.687085464 +0000 UTC m=+2022.359939731" lastFinishedPulling="2026-01-29 16:01:25.403351125 +0000 UTC m=+2029.076205362" observedRunningTime="2026-01-29 16:01:25.787012586 +0000 UTC m=+2029.459866823" watchObservedRunningTime="2026-01-29 16:01:25.802667573 +0000 UTC m=+2029.475521830" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.015899 5008 scope.go:117] "RemoveContainer" containerID="4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.041139 5008 scope.go:117] "RemoveContainer" containerID="9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.095093 5008 scope.go:117] "RemoveContainer" containerID="169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.132947 5008 scope.go:117] "RemoveContainer" containerID="84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.170963 5008 scope.go:117] "RemoveContainer" containerID="be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.231631 5008 scope.go:117] "RemoveContainer" containerID="415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896" Jan 29 16:01:27 crc kubenswrapper[5008]: I0129 16:01:27.256247 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:27 crc kubenswrapper[5008]: I0129 16:01:27.256582 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:28 crc kubenswrapper[5008]: I0129 16:01:28.305221 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cmr5" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:01:28 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 16:01:28 crc kubenswrapper[5008]: > Jan 29 16:01:35 crc kubenswrapper[5008]: E0129 16:01:35.325883 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.337757 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.404114 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.578493 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 
16:01:38 crc kubenswrapper[5008]: I0129 16:01:38.869926 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cmr5" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" containerID="cri-o://627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" gracePeriod=2 Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.052826 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.061027 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.319537 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.334653 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" path="/var/lib/kubelet/pods/00b42485-f42b-4ca6-8e84-1a795454dd9f/volumes" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429187 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429445 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429699 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.431566 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities" (OuterVolumeSpecName: "utilities") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.439957 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm" (OuterVolumeSpecName: "kube-api-access-l2qcm") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "kube-api-access-l2qcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.532063 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.532093 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.541816 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.634574 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879624 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" exitCode=0 Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"e18ece1d64640eef6799f2182daa611c9cd47488c0aef34b85d423cbc390275e"} Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879706 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879709 5008 scope.go:117] "RemoveContainer" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.917362 5008 scope.go:117] "RemoveContainer" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.918126 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.924648 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.935581 5008 scope.go:117] "RemoveContainer" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.998272 5008 scope.go:117] "RemoveContainer" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: E0129 16:01:39.998977 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": container with ID starting with 627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86 not found: ID does not exist" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999008 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} err="failed to get container status \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": rpc error: code = NotFound desc = could not find container \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": container with ID starting with 627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86 not found: ID does not exist" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999029 5008 scope.go:117] "RemoveContainer" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: E0129 16:01:39.999698 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": container with ID starting with 0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8 not found: ID does not exist" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999832 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8"} err="failed to get container status \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": rpc error: code = NotFound desc = could not find container \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": container with ID starting with 0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8 not found: ID does not exist" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999922 5008 scope.go:117] "RemoveContainer" 
containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:40 crc kubenswrapper[5008]: E0129 16:01:40.000296 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": container with ID starting with 6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac not found: ID does not exist" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:40 crc kubenswrapper[5008]: I0129 16:01:40.000322 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac"} err="failed to get container status \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": rpc error: code = NotFound desc = could not find container \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": container with ID starting with 6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac not found: ID does not exist" Jan 29 16:01:41 crc kubenswrapper[5008]: I0129 16:01:41.336767 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" path="/var/lib/kubelet/pods/9b131575-cb55-4ef5-908d-83b174d165d0/volumes" Jan 29 16:01:50 crc kubenswrapper[5008]: E0129 16:01:50.326898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.038942 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.048143 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.336917 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef9ab07-3037-4115-bb8e-954191b169af" path="/var/lib/kubelet/pods/eef9ab07-3037-4115-bb8e-954191b169af/volumes" Jan 29 16:02:04 crc kubenswrapper[5008]: E0129 16:02:04.327547 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:14 crc kubenswrapper[5008]: I0129 16:02:14.027974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 16:02:14 crc kubenswrapper[5008]: I0129 16:02:14.035721 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 16:02:15 crc kubenswrapper[5008]: I0129 16:02:15.334609 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" path="/var/lib/kubelet/pods/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f/volumes" Jan 29 16:02:19 crc kubenswrapper[5008]: E0129 16:02:19.328007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" 
podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.358719 5008 scope.go:117] "RemoveContainer" containerID="89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.424044 5008 scope.go:117] "RemoveContainer" containerID="36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.480933 5008 scope.go:117] "RemoveContainer" containerID="cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e" Jan 29 16:02:34 crc kubenswrapper[5008]: E0129 16:02:34.327405 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.061468 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.072561 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.338743 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfacde84-7d28-464b-8854-622fd127956c" path="/var/lib/kubelet/pods/dfacde84-7d28-464b-8854-622fd127956c/volumes" Jan 29 16:02:46 crc kubenswrapper[5008]: E0129 16:02:46.326694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:59 crc kubenswrapper[5008]: E0129 16:02:59.325554 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:10 crc kubenswrapper[5008]: E0129 16:03:10.328015 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.453877 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.454591 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.455979 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.624549 5008 scope.go:117] "RemoveContainer" containerID="b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.652601 5008 scope.go:117] "RemoveContainer" 
containerID="029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.678298 5008 scope.go:117] "RemoveContainer" containerID="fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.722692 5008 scope.go:117] "RemoveContainer" containerID="105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.743368 5008 scope.go:117] "RemoveContainer" containerID="560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.791417 5008 scope.go:117] "RemoveContainer" containerID="0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.851386 5008 scope.go:117] "RemoveContainer" containerID="e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.872984 5008 scope.go:117] "RemoveContainer" containerID="c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.920340 5008 scope.go:117] "RemoveContainer" containerID="7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.960154 5008 scope.go:117] "RemoveContainer" containerID="4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20" Jan 29 16:03:38 crc kubenswrapper[5008]: E0129 16:03:38.325887 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:43 crc kubenswrapper[5008]: I0129 16:03:43.991040 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:03:43 crc kubenswrapper[5008]: I0129 16:03:43.991620 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:03:52 crc kubenswrapper[5008]: E0129 16:03:52.326158 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:06 crc kubenswrapper[5008]: E0129 16:04:06.327192 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:13 crc kubenswrapper[5008]: I0129 16:04:13.990399 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:04:13 crc kubenswrapper[5008]: I0129 16:04:13.990869 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:04:17 crc kubenswrapper[5008]: E0129 16:04:17.330910 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:31 crc kubenswrapper[5008]: E0129 16:04:31.326273 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:43 crc kubenswrapper[5008]: E0129 16:04:43.326604 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.990865 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991183 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991232 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991928 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991986 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" gracePeriod=600 Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.875674 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" 
containerID="0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" exitCode=0 Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.876351 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.876443 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:04:45 crc kubenswrapper[5008]: I0129 16:04:45.885056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} Jan 29 16:04:55 crc kubenswrapper[5008]: E0129 16:04:55.326729 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:06 crc kubenswrapper[5008]: E0129 16:05:06.326421 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:21 crc kubenswrapper[5008]: E0129 16:05:21.327321 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:33 crc kubenswrapper[5008]: E0129 16:05:33.329392 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:47 crc kubenswrapper[5008]: E0129 16:05:47.331827 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:02 crc kubenswrapper[5008]: E0129 16:06:02.326991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:15 crc kubenswrapper[5008]: E0129 16:06:15.328361 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:29 crc kubenswrapper[5008]: E0129 16:06:29.326149 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:44 crc kubenswrapper[5008]: E0129 16:06:44.326993 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:55 crc kubenswrapper[5008]: E0129 16:06:55.328410 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:06 crc kubenswrapper[5008]: E0129 16:07:06.327337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:13 crc kubenswrapper[5008]: I0129 16:07:13.990763 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:07:13 crc kubenswrapper[5008]: I0129 16:07:13.991225 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:07:21 crc kubenswrapper[5008]: E0129 16:07:21.326681 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722006 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722839 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-content" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722856 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-content" Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722876 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-utilities" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722885 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-utilities" Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722901 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722908 5008 
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.723135 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.724841 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.729924 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"]
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.868882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.869001 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.869137 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971192 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971617 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.972139 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.972217 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:22.996327 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:23.050110 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7"
Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:23.555759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"]
Jan 29 16:07:23 crc kubenswrapper[5008]: W0129 16:07:23.567028 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed48245_be09_46c8_97f9_263179717512.slice/crio-4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26 WatchSource:0}: Error finding container 4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26: Status 404 returned error can't find the container with id 4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26
Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253415 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" exitCode=0
Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253499 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766"}
Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253677 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerStarted","Data":"4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26"}
Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.255468 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.388424 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.388712 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
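"Requesting bearer token: invalid status code from registry 403 (Forbidden)" means the pull died during the registry v2 token handshake: the registry answered the initial request with a 401 challenge, and the token endpoint named in that challenge refused the client, typically because registry.redhat.io credentials or entitlements are missing on the node. A rough Go sketch of the first half of that handshake; illustrative only, since a real client parses the challenge fields and attaches credentials:

    // Fetch the registry's /v2/ endpoint and print the bearer challenge.
    // For entitled content, following the challenge's realm without valid
    // credentials yields the 403 the kubelet reports above.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://registry.redhat.io/v2/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        // Expect 401 plus a challenge of the form:
        // Bearer realm="https://...",service="...",scope="..."
        fmt.Println("registry status:", resp.StatusCode)
        fmt.Println("challenge:", resp.Header.Get("Www-Authenticate"))
        // The next step would be a GET against the realm with the given
        // service and scope; a 403 there surfaces as ErrImagePull.
    }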
Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.389967 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:07:25 crc kubenswrapper[5008]: E0129 16:07:25.262840 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:07:34 crc kubenswrapper[5008]: E0129 16:07:34.327595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.502695 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.503210 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.504373 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:07:43 crc kubenswrapper[5008]: I0129 16:07:43.991270 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:07:43 crc kubenswrapper[5008]: I0129 16:07:43.991904 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:07:45 crc kubenswrapper[5008]: E0129 16:07:45.326163 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:07:53 crc kubenswrapper[5008]: E0129 16:07:53.327582 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.277407 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9lmvr"]
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.280535 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.294641 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"]
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330063 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330128 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330149 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431841 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431921 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.432382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.432404 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr"
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.452058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.615499 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.110768 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.534940 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108" exitCode=0 Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.535016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"} Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.535077 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"3027721e802c941c68316a40edc4f5165c2ccf1c65e058c580444ac3144242da"} Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.687651 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.687824 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.689058 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:07:57 crc kubenswrapper[5008]: E0129 16:07:57.546218 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:00 crc kubenswrapper[5008]: E0129 16:08:00.324976 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.473036 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.473640 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.474845 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.522368 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.522768 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.524029 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:12 crc kubenswrapper[5008]: E0129 16:08:12.325917 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991071 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991359 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991400 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.992105 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.992156 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" gracePeriod=600 Jan 29 16:08:14 crc kubenswrapper[5008]: E0129 16:08:14.160299 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695265 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" exitCode=0 Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695381 5008 scope.go:117] "RemoveContainer" containerID="0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.696037 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:14 crc kubenswrapper[5008]: E0129 16:08:14.696413 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.571953 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.574296 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.590272 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629430 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629583 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629653 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.731929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732012 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732153 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732870 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.754663 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.914098 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.365118 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.714838 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.714900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"5b1b00bb2ae97cde561959176674c8591e6b4a491353c5009f561f79b72ee787"} Jan 29 16:08:17 crc kubenswrapper[5008]: I0129 16:08:17.724622 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" exitCode=0 Jan 29 16:08:17 crc kubenswrapper[5008]: I0129 16:08:17.724662 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.855145 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.855642 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.857287 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:18 crc kubenswrapper[5008]: E0129 16:08:18.734333 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:21 crc kubenswrapper[5008]: E0129 16:08:21.326673 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.455012 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.455888 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.457640 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:25 crc kubenswrapper[5008]: E0129 16:08:25.326535 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:26 crc kubenswrapper[5008]: I0129 16:08:26.324146 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:26 crc kubenswrapper[5008]: E0129 16:08:26.324387 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.454236 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.454694 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.455873 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" 
podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:33 crc kubenswrapper[5008]: E0129 16:08:33.328500 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:36 crc kubenswrapper[5008]: E0129 16:08:36.328904 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:37 crc kubenswrapper[5008]: I0129 16:08:37.348604 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.357616 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.493033 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.493181 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.494363 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:42 crc kubenswrapper[5008]: E0129 16:08:42.328564 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:45 crc kubenswrapper[5008]: E0129 16:08:45.325527 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:49 crc kubenswrapper[5008]: I0129 16:08:49.324518 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:49 crc kubenswrapper[5008]: E0129 16:08:49.325566 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:50 crc kubenswrapper[5008]: E0129 16:08:50.325898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:50 crc kubenswrapper[5008]: E0129 16:08:50.325916 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.468318 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.468853 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.469956 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.473874 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.473977 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.475150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:00 crc kubenswrapper[5008]: I0129 16:09:00.323947 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:00 crc kubenswrapper[5008]: E0129 16:09:00.324671 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:03 crc kubenswrapper[5008]: E0129 16:09:03.327234 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:05 crc kubenswrapper[5008]: E0129 16:09:05.326200 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:08 crc kubenswrapper[5008]: E0129 
16:09:08.326021 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:09 crc kubenswrapper[5008]: E0129 16:09:09.326052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:13 crc kubenswrapper[5008]: I0129 16:09:13.324265 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:13 crc kubenswrapper[5008]: E0129 16:09:13.325717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:15 crc kubenswrapper[5008]: E0129 16:09:15.326817 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.458444 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.459101 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.460355 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:22 crc kubenswrapper[5008]: E0129 16:09:22.324655 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:24 crc kubenswrapper[5008]: E0129 16:09:24.326365 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:28 crc kubenswrapper[5008]: I0129 16:09:28.324075 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:28 crc kubenswrapper[5008]: E0129 16:09:28.325017 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:29 
crc kubenswrapper[5008]: E0129 16:09:29.325966 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:31 crc kubenswrapper[5008]: E0129 16:09:31.325242 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:36 crc kubenswrapper[5008]: E0129 16:09:36.326170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:36 crc kubenswrapper[5008]: E0129 16:09:36.326216 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:39 crc kubenswrapper[5008]: I0129 16:09:39.324347 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:39 crc kubenswrapper[5008]: E0129 16:09:39.324923 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:43 crc kubenswrapper[5008]: E0129 16:09:43.325849 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:44 crc kubenswrapper[5008]: E0129 16:09:44.325695 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.463064 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.463626 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.464909 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:50 crc kubenswrapper[5008]: E0129 16:09:50.325936 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:51 crc kubenswrapper[5008]: I0129 16:09:51.324631 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:51 crc kubenswrapper[5008]: E0129 16:09:51.325198 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:57 crc kubenswrapper[5008]: E0129 16:09:57.334151 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:58 crc kubenswrapper[5008]: E0129 16:09:58.325571 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:59 crc kubenswrapper[5008]: E0129 16:09:59.325824 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:02 crc kubenswrapper[5008]: E0129 16:10:02.326566 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:03 crc kubenswrapper[5008]: I0129 16:10:03.323763 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:03 crc kubenswrapper[5008]: E0129 16:10:03.324580 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:10 crc kubenswrapper[5008]: E0129 16:10:10.326726 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:11 crc kubenswrapper[5008]: E0129 16:10:11.325856 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:14 crc kubenswrapper[5008]: E0129 16:10:14.326593 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:15 crc kubenswrapper[5008]: E0129 16:10:15.325235 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 
16:10:16 crc kubenswrapper[5008]: I0129 16:10:16.324476 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:16 crc kubenswrapper[5008]: E0129 16:10:16.324808 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:22 crc kubenswrapper[5008]: E0129 16:10:22.326048 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:25 crc kubenswrapper[5008]: E0129 16:10:25.327413 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:26 crc kubenswrapper[5008]: E0129 16:10:26.325380 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:28 crc kubenswrapper[5008]: I0129 16:10:28.323552 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:28 crc kubenswrapper[5008]: E0129 16:10:28.324059 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.452565 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.453112 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.454325 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:36 crc kubenswrapper[5008]: E0129 16:10:36.326544 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:38 crc kubenswrapper[5008]: E0129 16:10:38.325320 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:39 crc kubenswrapper[5008]: E0129 16:10:39.325581 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:41 crc kubenswrapper[5008]: E0129 16:10:41.326252 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:42 crc kubenswrapper[5008]: I0129 16:10:42.323948 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:42 crc kubenswrapper[5008]: E0129 16:10:42.324280 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.458698 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.459223 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.460425 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" 
podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:50 crc kubenswrapper[5008]: E0129 16:10:50.325961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:52 crc kubenswrapper[5008]: E0129 16:10:52.325660 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:55 crc kubenswrapper[5008]: E0129 16:10:55.325730 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:57 crc kubenswrapper[5008]: I0129 16:10:57.330424 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:57 crc kubenswrapper[5008]: E0129 16:10:57.331023 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:01 crc kubenswrapper[5008]: E0129 16:11:01.327458 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:04 crc kubenswrapper[5008]: E0129 16:11:04.326828 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:04 crc kubenswrapper[5008]: E0129 16:11:04.327107 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:08 crc kubenswrapper[5008]: E0129 16:11:08.325850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:10 crc kubenswrapper[5008]: I0129 16:11:10.323938 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" 
Jan 29 16:11:10 crc kubenswrapper[5008]: E0129 16:11:10.324505 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:13 crc kubenswrapper[5008]: E0129 16:11:13.326991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:15 crc kubenswrapper[5008]: E0129 16:11:15.327358 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.458323 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.458715 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" logger="UnhandledError" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.460183 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:20 crc kubenswrapper[5008]: E0129 16:11:20.325876 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:23 crc kubenswrapper[5008]: I0129 16:11:23.324226 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:23 crc kubenswrapper[5008]: E0129 16:11:23.324859 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:26 crc kubenswrapper[5008]: E0129 16:11:26.326819 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:28 crc kubenswrapper[5008]: E0129 16:11:28.325296 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:28 crc kubenswrapper[5008]: E0129 16:11:28.326672 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:32 crc kubenswrapper[5008]: E0129 16:11:32.325982 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:35 crc kubenswrapper[5008]: I0129 16:11:35.324925 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:35 crc kubenswrapper[5008]: E0129 16:11:35.326213 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:37 crc kubenswrapper[5008]: E0129 16:11:37.332154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:40 crc kubenswrapper[5008]: E0129 16:11:40.325648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:40 crc kubenswrapper[5008]: E0129 16:11:40.327193 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:43 crc kubenswrapper[5008]: E0129 16:11:43.326390 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:48 crc kubenswrapper[5008]: I0129 16:11:48.325447 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:48 crc kubenswrapper[5008]: E0129 16:11:48.326741 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:50 crc kubenswrapper[5008]: E0129 16:11:50.330577 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:53 crc kubenswrapper[5008]: E0129 16:11:53.333453 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:53 crc kubenswrapper[5008]: E0129 16:11:53.333508 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 
29 16:11:58 crc kubenswrapper[5008]: E0129 16:11:58.326234 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:12:03 crc kubenswrapper[5008]: I0129 16:12:03.324126 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:12:03 crc kubenswrapper[5008]: E0129 16:12:03.325656 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:12:03 crc kubenswrapper[5008]: E0129 16:12:03.328912 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:12:04 crc kubenswrapper[5008]: E0129 16:12:04.326035 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:12:07 crc kubenswrapper[5008]: E0129 16:12:07.336511 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:12:12 crc kubenswrapper[5008]: E0129 16:12:12.327403 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:12:14 crc kubenswrapper[5008]: E0129 16:12:14.326284 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:12:17 crc kubenswrapper[5008]: I0129 16:12:17.330423 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:12:17 crc kubenswrapper[5008]: E0129 16:12:17.331145 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:12:19 crc kubenswrapper[5008]: E0129 16:12:19.326347 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:12:21 crc kubenswrapper[5008]: E0129 16:12:21.333217 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:12:26 crc kubenswrapper[5008]: E0129 16:12:26.325878 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:12:29 crc kubenswrapper[5008]: E0129 16:12:29.328802 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:12:30 crc kubenswrapper[5008]: E0129 16:12:30.325552 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:12:32 crc kubenswrapper[5008]: I0129 16:12:32.324305 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:12:32 crc kubenswrapper[5008]: E0129 16:12:32.324822 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:12:32 crc kubenswrapper[5008]: E0129 16:12:32.326206 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:12:37 crc kubenswrapper[5008]: E0129 16:12:37.332961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:12:42 crc kubenswrapper[5008]: E0129 16:12:42.326880 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:12:42 crc kubenswrapper[5008]: E0129 16:12:42.327384 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:12:44 crc kubenswrapper[5008]: E0129 16:12:44.326254 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:12:47 crc kubenswrapper[5008]: I0129 16:12:47.330973 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:12:47 crc kubenswrapper[5008]: E0129 16:12:47.331686 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:12:52 crc kubenswrapper[5008]: E0129 16:12:52.325431 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:12:53 crc kubenswrapper[5008]: E0129 16:12:53.326585 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:12:54 crc kubenswrapper[5008]: E0129 16:12:54.324764 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:12:55 crc kubenswrapper[5008]: E0129 16:12:55.325255 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:12:58 crc kubenswrapper[5008]: I0129 16:12:58.323494 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:12:58 crc kubenswrapper[5008]: E0129 16:12:58.324022 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:13:06 crc kubenswrapper[5008]: E0129 16:13:06.326514 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:13:07 crc kubenswrapper[5008]: E0129 16:13:07.333374 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:13:08 crc kubenswrapper[5008]: E0129 16:13:08.325259 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:13:10 crc kubenswrapper[5008]: E0129 16:13:10.325527 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:13:13 crc kubenswrapper[5008]: I0129 16:13:13.324572 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:13:13 crc kubenswrapper[5008]: E0129 16:13:13.325042 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:13:19 crc kubenswrapper[5008]: E0129 16:13:19.328535 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:13:20 crc kubenswrapper[5008]: I0129 16:13:20.325690 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 16:13:20 crc kubenswrapper[5008]: E0129 16:13:20.462679 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:13:20 crc kubenswrapper[5008]: E0129 16:13:20.462909 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:13:20 crc kubenswrapper[5008]: E0129 16:13:20.464448 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:13:23 crc kubenswrapper[5008]: E0129 16:13:23.325048 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:13:24 crc kubenswrapper[5008]: E0129 16:13:24.326308 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:13:26 crc kubenswrapper[5008]: I0129 16:13:26.324382 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
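The three 16:13:20 records above are the root cause behind the ImagePullBackOff storm: the CRI-level pull fails while requesting a bearer token from registry.redhat.io, which answers 403 (Forbidden). A minimal sketch of that token handshake, useful for reproducing the failure outside the kubelet, assuming the Python requests package and placeholder credentials (the realm and service values are discovered from the registry's WWW-Authenticate challenge rather than hard-coded):

import re
import requests

REGISTRY = "https://registry.redhat.io"
REPOSITORY = "redhat/certified-operator-index"  # the image repository from the log

def probe_pull_token(user, password):
    # Step 1: unauthenticated ping; a Docker v2 registry answers 401 and
    # names its token endpoint in the WWW-Authenticate challenge header.
    ping = requests.get(f"{REGISTRY}/v2/", timeout=10)
    challenge = dict(re.findall(r'(\w+)="([^"]*)"',
                                ping.headers.get("WWW-Authenticate", "")))
    # Step 2: ask that realm for a pull-scoped bearer token. A 403 here is
    # the exact failure the kubelet logged: the request reached the token
    # service, but the account is not entitled to pull this repository.
    resp = requests.get(
        challenge["realm"],
        params={"service": challenge.get("service", ""),
                "scope": f"repository:{REPOSITORY}:pull"},
        auth=(user, password),
        timeout=10,
    )
    return resp.status_code

if __name__ == "__main__":
    # Placeholder credentials; on a node these come from the pull secret.
    print(probe_pull_token("<user>", "<password>"))  # 200 = entitled, 403 = the error above

A 200 from this probe with working credentials, combined with the 403 in the log, would point at the node's pull secret (stale or missing entitlement) rather than at the registry itself.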
event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"} Jan 29 16:13:30 crc kubenswrapper[5008]: E0129 16:13:30.326737 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:13:32 crc kubenswrapper[5008]: E0129 16:13:32.325913 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.447447 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.448355 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.450560 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.449993 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.450710 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.452168 5008 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:13:41 crc kubenswrapper[5008]: E0129 16:13:41.325602 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:13:47 crc kubenswrapper[5008]: E0129 16:13:47.332239 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:13:47 crc kubenswrapper[5008]: E0129 16:13:47.332631 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:13:53 crc kubenswrapper[5008]: E0129 16:13:53.327848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:13:55 crc kubenswrapper[5008]: E0129 16:13:55.325496 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:13:58 crc kubenswrapper[5008]: E0129 16:13:58.328067 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:14:02 crc kubenswrapper[5008]: E0129 16:14:02.326683 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:14:05 crc kubenswrapper[5008]: E0129 16:14:05.325236 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.326440 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.453469 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.453955 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.455124 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.628093 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.630681 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.644210 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805750 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908003 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908260 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.909197 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.909598 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.940924 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.962865 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.454001 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933471 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" exitCode=0 Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933667 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6"} Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerStarted","Data":"4bc8d674639c663e12f180fa6c89b4e70c92f8b3fda66ccac4d3e879acdf15cc"} Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.073478 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.073806 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.073806 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.075245 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.331071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.331377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.949802 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:21 crc kubenswrapper[5008]: E0129 16:14:21.325352 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:24 crc kubenswrapper[5008]: E0129 16:14:24.326215 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.326831 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.453056 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.453212 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.454379 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:32 crc kubenswrapper[5008]: E0129 16:14:32.325144 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:32 crc kubenswrapper[5008]: E0129 16:14:32.325150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:37 crc kubenswrapper[5008]: E0129 16:14:37.332542 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:46 crc kubenswrapper[5008]: E0129 16:14:46.326476 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:46 crc kubenswrapper[5008]: E0129 16:14:46.326712 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:47 crc kubenswrapper[5008]: E0129 16:14:47.331882 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:47 crc kubenswrapper[5008]: E0129 16:14:47.332042 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:51 crc kubenswrapper[5008]: E0129 16:14:51.327907 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:58 crc kubenswrapper[5008]: E0129 16:14:58.325607 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.325977 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.465868 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.466049 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.467259 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.165949 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.167484 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.173105 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.176679 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.189263 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257596 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257685 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257845 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.360954 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.361224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.361395 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.363004 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.369216 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.384518 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.498413 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.941905 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: W0129 16:15:00.945954 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e772a8_b044_4c03_a83a_4634997d4139.slice/crio-cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c WatchSource:0}: Error finding container cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c: Status 404 returned error can't find the container with id cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301014 5008 generic.go:334] "Generic (PLEG): container finished" podID="44e772a8-b044-4c03-a83a-4634997d4139" containerID="1955b67636880bbd2ed0bae81f814ec3605cfeeec18fe7a5bbb4a833cb6b1859" exitCode=0
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301054 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerDied","Data":"1955b67636880bbd2ed0bae81f814ec3605cfeeec18fe7a5bbb4a833cb6b1859"}
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerStarted","Data":"cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"}
Jan 29 16:15:02 crc kubenswrapper[5008]: E0129 16:15:02.325688 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.657020 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.819146 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume" (OuterVolumeSpecName: "config-volume") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.823505 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.823872 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj" (OuterVolumeSpecName: "kube-api-access-wm8vj") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "kube-api-access-wm8vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920580 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920630 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920642 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerDied","Data":"cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"}
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317739 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317749 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.724577 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.733042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 16:15:05 crc kubenswrapper[5008]: E0129 16:15:05.326095 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:05 crc kubenswrapper[5008]: I0129 16:15:05.342488 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a912999-007c-495d-aaa3-857d76158a91" path="/var/lib/kubelet/pods/4a912999-007c-495d-aaa3-857d76158a91/volumes"
Jan 29 16:15:12 crc kubenswrapper[5008]: E0129 16:15:12.325585 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:12 crc kubenswrapper[5008]: E0129 16:15:12.326240 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:15:16 crc kubenswrapper[5008]: E0129 16:15:16.326270 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:15:18 crc kubenswrapper[5008]: E0129 16:15:18.325651 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:15:24 crc kubenswrapper[5008]: E0129 16:15:24.325867 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:15:24 crc kubenswrapper[5008]: E0129 16:15:24.325991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:15:26 crc kubenswrapper[5008]: E0129 16:15:26.326160 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:15:27 crc kubenswrapper[5008]: I0129 16:15:27.227099 5008 scope.go:117] "RemoveContainer" containerID="74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b" Jan 29 16:15:28 crc kubenswrapper[5008]: E0129 16:15:28.325870 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:15:31 crc kubenswrapper[5008]: E0129 16:15:31.330414 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:15:38 crc kubenswrapper[5008]: E0129 16:15:38.326654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:15:39 crc 
kubenswrapper[5008]: E0129 16:15:39.326742 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.457836 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.458630 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.460318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:15:41 crc kubenswrapper[5008]: E0129 16:15:41.325617 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:15:43 crc 
Jan 29 16:15:43 crc kubenswrapper[5008]: I0129 16:15:43.990390 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:15:43 crc kubenswrapper[5008]: I0129 16:15:43.990703 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:15:44 crc kubenswrapper[5008]: E0129 16:15:44.326178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:52 crc kubenswrapper[5008]: E0129 16:15:52.326278 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:53 crc kubenswrapper[5008]: E0129 16:15:53.325397 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:15:54 crc kubenswrapper[5008]: E0129 16:15:54.326321 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:15:54 crc kubenswrapper[5008]: E0129 16:15:54.327082 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:56 crc kubenswrapper[5008]: E0129 16:15:56.326879 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:04 crc kubenswrapper[5008]: E0129 16:16:04.326756 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:05 crc kubenswrapper[5008]: E0129 16:16:05.324316 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:08 crc kubenswrapper[5008]: E0129 16:16:08.326114 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:08 crc kubenswrapper[5008]: E0129 16:16:08.335547 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:10 crc kubenswrapper[5008]: E0129 16:16:10.325465 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:13 crc kubenswrapper[5008]: I0129 16:16:13.990299 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:16:13 crc kubenswrapper[5008]: I0129 16:16:13.991166 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:16:15 crc kubenswrapper[5008]: E0129 16:16:15.325646 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:19 crc kubenswrapper[5008]: E0129 16:16:19.325475 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:20 crc kubenswrapper[5008]: E0129 16:16:20.326036 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:20 crc kubenswrapper[5008]: E0129 16:16:20.326163 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:25 crc kubenswrapper[5008]: E0129 16:16:25.327652 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:29 crc kubenswrapper[5008]: E0129 16:16:29.325105 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:31 crc kubenswrapper[5008]: E0129 16:16:31.325902 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:32 crc kubenswrapper[5008]: E0129 16:16:32.326600 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:34 crc kubenswrapper[5008]: E0129 16:16:34.326296 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:37 crc kubenswrapper[5008]: E0129 16:16:37.337347 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:41 crc kubenswrapper[5008]: E0129 16:16:41.326193 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990501 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990857 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990900 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.991625 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.991680 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" gracePeriod=600
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163105 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" exitCode=0
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163142 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"}
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163196 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.300285 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:16:44 crc kubenswrapper[5008]: E0129 16:16:44.301403 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.301424 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.301678 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.302761 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.307355 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nvrh2"/"openshift-service-ca.crt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.307740 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nvrh2"/"kube-root-ca.crt"
Jan 29 16:16:44 crc kubenswrapper[5008]: E0129 16:16:44.329046 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.331355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.372741 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.372840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475041 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475124 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475990 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.497973 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.632372 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.069963 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:16:45 crc kubenswrapper[5008]: W0129 16:16:45.070375 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd320dd2e_14dc_4c54_86bf_25b5abd30dae.slice/crio-4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31 WatchSource:0}: Error finding container 4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31: Status 404 returned error can't find the container with id 4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31
Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.177506 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"}
Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.179143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31"}
Jan 29 16:16:46 crc kubenswrapper[5008]: E0129 16:16:46.325654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:50 crc kubenswrapper[5008]: E0129 16:16:50.761096 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:52 crc kubenswrapper[5008]: E0129 16:16:52.932501 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:52 crc kubenswrapper[5008]: E0129 16:16:52.932502 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:54 crc kubenswrapper[5008]: I0129 16:16:54.266629 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"}
Jan 29 16:16:55 crc kubenswrapper[5008]: I0129 16:16:55.281258 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"}
event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"} Jan 29 16:16:55 crc kubenswrapper[5008]: I0129 16:16:55.319060 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" podStartSLOduration=2.708850406 podStartE2EDuration="11.319043024s" podCreationTimestamp="2026-01-29 16:16:44 +0000 UTC" firstStartedPulling="2026-01-29 16:16:45.072278672 +0000 UTC m=+2948.745132909" lastFinishedPulling="2026-01-29 16:16:53.68247129 +0000 UTC m=+2957.355325527" observedRunningTime="2026-01-29 16:16:55.313270105 +0000 UTC m=+2958.986124342" watchObservedRunningTime="2026-01-29 16:16:55.319043024 +0000 UTC m=+2958.991897261" Jan 29 16:16:55 crc kubenswrapper[5008]: E0129 16:16:55.326322 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:16:58 crc kubenswrapper[5008]: E0129 16:16:58.328209 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:17:04 crc kubenswrapper[5008]: E0129 16:17:04.325191 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.308575 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"] Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.310392 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.312597 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nvrh2"/"default-dockercfg-r9q2j" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.394297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.394370 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496643 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.525428 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.635528 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: W0129 16:17:05.666822 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8077b692_59d3_4065_8632_745ffcd783af.slice/crio-fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe WatchSource:0}: Error finding container fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe: Status 404 returned error can't find the container with id fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe Jan 29 16:17:06 crc kubenswrapper[5008]: I0129 16:17:06.394703 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerStarted","Data":"fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe"} Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.449869 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.450016 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.451260 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source 
Jan 29 16:17:07 crc kubenswrapper[5008]: E0129 16:17:07.332270 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:17:09 crc kubenswrapper[5008]: E0129 16:17:09.325754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:17:18 crc kubenswrapper[5008]: E0129 16:17:18.816120 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:17:18 crc kubenswrapper[5008]: E0129 16:17:18.817172 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:17:19 crc kubenswrapper[5008]: E0129 16:17:19.325948 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:17:19 crc kubenswrapper[5008]: E0129 16:17:19.325997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:17:19 crc kubenswrapper[5008]: I0129 16:17:19.502434 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerStarted","Data":"12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249"}
Jan 29 16:17:19 crc kubenswrapper[5008]: I0129 16:17:19.523130 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" podStartSLOduration=1.3095030730000001 podStartE2EDuration="14.523111221s" podCreationTimestamp="2026-01-29 16:17:05 +0000 UTC" firstStartedPulling="2026-01-29 16:17:05.670503408 +0000 UTC m=+2969.343357645" lastFinishedPulling="2026-01-29 16:17:18.884111556 +0000 UTC m=+2982.556965793" observedRunningTime="2026-01-29 16:17:19.514521813 +0000 UTC m=+2983.187376050" watchObservedRunningTime="2026-01-29 16:17:19.523111221 +0000 UTC m=+2983.195965448"
Jan 29 16:17:20 crc kubenswrapper[5008]: E0129 16:17:20.326550 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:17:21 crc kubenswrapper[5008]: I0129 16:17:21.520460 5008 generic.go:334] "Generic (PLEG): container finished" podID="8077b692-59d3-4065-8632-745ffcd783af" containerID="12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249" exitCode=125
Jan 29 16:17:21 crc kubenswrapper[5008]: I0129 16:17:21.520546 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerDied","Data":"12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249"}
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.645301 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm"
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.682762 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"]
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.693125 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"]
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.726638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"8077b692-59d3-4065-8632-745ffcd783af\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") "
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.726697 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"8077b692-59d3-4065-8632-745ffcd783af\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") "
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.727523 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host" (OuterVolumeSpecName: "host") pod "8077b692-59d3-4065-8632-745ffcd783af" (UID: "8077b692-59d3-4065-8632-745ffcd783af"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.745716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w" (OuterVolumeSpecName: "kube-api-access-cvl6w") pod "8077b692-59d3-4065-8632-745ffcd783af" (UID: "8077b692-59d3-4065-8632-745ffcd783af"). InnerVolumeSpecName "kube-api-access-cvl6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.829390 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") on node \"crc\" DevicePath \"\""
Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.829434 5008 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") on node \"crc\" DevicePath \"\""
Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.334353 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8077b692-59d3-4065-8632-745ffcd783af" path="/var/lib/kubelet/pods/8077b692-59d3-4065-8632-745ffcd783af/volumes"
Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.538587 5008 scope.go:117] "RemoveContainer" containerID="12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249"
Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.538618 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm"
Jan 29 16:17:31 crc kubenswrapper[5008]: E0129 16:17:31.326127 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:17:32 crc kubenswrapper[5008]: E0129 16:17:32.327026 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:17:32 crc kubenswrapper[5008]: E0129 16:17:32.327079 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:17:34 crc kubenswrapper[5008]: E0129 16:17:34.326130 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:17:34 crc kubenswrapper[5008]: E0129 16:17:34.326951 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:17:46 crc kubenswrapper[5008]: E0129 16:17:46.326407 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:17:46 crc kubenswrapper[5008]: E0129 16:17:46.326447 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:17:46 crc kubenswrapper[5008]: E0129 16:17:46.326694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:17:47 crc kubenswrapper[5008]: E0129 16:17:47.330693 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:17:48 crc kubenswrapper[5008]: E0129 16:17:48.325322 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:17:58 crc kubenswrapper[5008]: E0129 16:17:58.326447 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:17:59 crc kubenswrapper[5008]: E0129 16:17:59.325868 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:18:01 crc kubenswrapper[5008]: E0129 16:18:01.326418 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:18:01 crc kubenswrapper[5008]: E0129 16:18:01.327089 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:18:02 crc kubenswrapper[5008]: E0129 16:18:02.326252 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.614113 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f9c9f8766-4lf97_ce981b8e-ff53-48ad-b44e-b150c0b1b80f/barbican-api/0.log"
Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.731935 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f9c9f8766-4lf97_ce981b8e-ff53-48ad-b44e-b150c0b1b80f/barbican-api-log/0.log"
Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.838569 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d5688bfcd-94rkm_24c4cc25-9e50-4601-bac2-552e1aded799/barbican-keystone-listener/0.log"
Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.930364 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d5688bfcd-94rkm_24c4cc25-9e50-4601-bac2-552e1aded799/barbican-keystone-listener-log/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.008702 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c46c758ff-5p4jl_f77f54f0-02b9-4082-8a76-dc78a9b7d08c/barbican-worker/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.066420 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c46c758ff-5p4jl_f77f54f0-02b9-4082-8a76-dc78a9b7d08c/barbican-worker-log/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.220157 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/ceilometer-central-agent/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.286670 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/ceilometer-notification-agent/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.360047 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/sg-core/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.498352 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f60d298-c33b-44b3-a99c-a0e75a321a80/cinder-api/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.502960 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f60d298-c33b-44b3-a99c-a0e75a321a80/cinder-api-log/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.646136 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c4e7961-5802-47c7-becf-75dd01d6e7d1/cinder-scheduler/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.716547 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c4e7961-5802-47c7-becf-75dd01d6e7d1/probe/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.796690 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/init/0.log"
Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.996940 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/init/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.073137 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/dnsmasq-dns/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.183407 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b210097f-985c-4014-a76e-b430ef390fce/glance-httpd/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.300908 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b210097f-985c-4014-a76e-b430ef390fce/glance-log/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.368416 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d30face9-2636-4cb7-8e84-8558b7b40df4/glance-httpd/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.369558 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d30face9-2636-4cb7-8e84-8558b7b40df4/glance-log/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.683463 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-bf5f5fc4b-t9vk7_fc599e48-62d0-4908-b4ed-cd3f13094665/horizon/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.842324 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-779d6696cc-ltp9g_4732d1d7-c3d2-4f17-bf74-d92f350a3e2b/keystone-api/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.874759 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-bf5f5fc4b-t9vk7_fc599e48-62d0-4908-b4ed-cd3f13094665/horizon-log/0.log"
Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.880798 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29495041-5xjnv_3b2cbc69-268a-4c30-b9c0-d1352f380259/keystone-cron/0.log"
Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.094585 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2691fca5-fe1e-4796-bf43-7135e9d5a198/kube-state-metrics/0.log"
Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.367343 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-98cff5df-8qpcl_6bf14a27-dc0a-430e-affa-a6a28e944947/neutron-httpd/0.log"
Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.375971 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-98cff5df-8qpcl_6bf14a27-dc0a-430e-affa-a6a28e944947/neutron-api/0.log"
Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.823677 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffff5fc1-f4be-4fad-bfa8-890ea58d2a00/nova-api-log/0.log"
Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.875699 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffff5fc1-f4be-4fad-bfa8-890ea58d2a00/nova-api-api/0.log"
Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.144663 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fc7804a1-e957-4095-b882-901a403bce11/nova-cell0-conductor-conductor/0.log"
Jan 29 16:18:10 crc kubenswrapper[5008]: E0129 16:18:10.329028 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.446190 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1a40e352-7353-41e6-8c6e-58b7beca8ab9/nova-cell1-conductor-conductor/0.log"
path="/var/log/pods/openstack_nova-cell1-conductor-0_1a40e352-7353-41e6-8c6e-58b7beca8ab9/nova-cell1-conductor-conductor/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.503545 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_21ca19b4-0317-4b08-8dc2-a4295c2fb8e4/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.814302 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a4470533-b658-46fe-8749-f371b22703b2/nova-metadata-log/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.194577 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f6caa062-78b8-42ad-a655-6828f63a7e8f/nova-scheduler-scheduler/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.216918 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: E0129 16:18:11.326194 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.414256 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.448592 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/galera/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.596457 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a4470533-b658-46fe-8749-f371b22703b2/nova-metadata-metadata/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.623369 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.961389 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/mysql-bootstrap/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.022636 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b26c725-8ee1-4144-baa0-a4a85bb7e1d2/openstackclient/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.040979 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/galera/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.289138 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bw9wr_0dd702c8-269b-4fb6-a3a7-03adf93d916a/ovn-controller/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: E0129 16:18:12.326154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.344004 
5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qkf4v_90c13843-e314-4465-af68-367fc8d59731/openstack-network-exporter/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.509091 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server-init/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.775160 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server-init/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.791569 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovs-vswitchd/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.853753 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.029865 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f251affb-8e6d-445d-996c-da5e3fc29de8/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.057775 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f251affb-8e6d-445d-996c-da5e3fc29de8/ovn-northd/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.171392 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d502938-9e22-4a6c-951e-b476cb87ee8f/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.248066 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d502938-9e22-4a6c-951e-b476cb87ee8f/ovsdbserver-nb/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.353242 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106/ovsdbserver-sb/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.399234 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.643701 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55d9fbf66-r5kj8_85024049-9e4b-4814-a617-cd17614f2a80/placement-api/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.660759 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55d9fbf66-r5kj8_85024049-9e4b-4814-a617-cd17614f2a80/placement-log/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.768416 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/setup-container/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.942578 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.041447 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/rabbitmq/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.052534 5008 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.311814 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.315746 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/rabbitmq/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: E0129 16:18:14.325044 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.414073 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c6fbdb57f-zvhpz_64c08f63-12a2-4dfb-b96d-0a12e9725021/proxy-httpd/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.547717 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c6fbdb57f-zvhpz_64c08f63-12a2-4dfb-b96d-0a12e9725021/proxy-server/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.574441 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-phmts_5b273a50-b2db-40d5-b4b4-6494206c606d/swift-ring-rebalance/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.772013 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-auditor/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.851326 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-reaper/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.867821 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-replicator/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.940681 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.013490 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-auditor/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.105048 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.109050 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-replicator/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.147618 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-updater/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.270837 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-auditor/0.log" Jan 29 
16:18:15 crc kubenswrapper[5008]: E0129 16:18:15.326493 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.351470 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-expirer/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.374446 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-replicator/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.394949 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.466697 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-updater/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.589402 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/swift-recon-cron/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.624007 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/rsync/0.log" Jan 29 16:18:18 crc kubenswrapper[5008]: I0129 16:18:18.520479 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b37ef43d-23ae-4a9c-af60-e616882400c3/memcached/0.log" Jan 29 16:18:23 crc kubenswrapper[5008]: E0129 16:18:23.332797 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:18:24 crc kubenswrapper[5008]: E0129 16:18:24.326178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:24 crc kubenswrapper[5008]: E0129 16:18:24.326217 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:27 crc kubenswrapper[5008]: E0129 16:18:27.343360 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:28 crc kubenswrapper[5008]: I0129 16:18:28.325123 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:18:30 crc 
kubenswrapper[5008]: I0129 16:18:30.063995 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" exitCode=0 Jan 29 16:18:30 crc kubenswrapper[5008]: I0129 16:18:30.064643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2"} Jan 29 16:18:31 crc kubenswrapper[5008]: I0129 16:18:31.079552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerStarted","Data":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} Jan 29 16:18:31 crc kubenswrapper[5008]: I0129 16:18:31.106188 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qmv7" podStartSLOduration=2.670004418 podStartE2EDuration="11m9.106165272s" podCreationTimestamp="2026-01-29 16:07:22 +0000 UTC" firstStartedPulling="2026-01-29 16:07:24.255197834 +0000 UTC m=+2387.928052071" lastFinishedPulling="2026-01-29 16:18:30.691358688 +0000 UTC m=+3054.364212925" observedRunningTime="2026-01-29 16:18:31.098255371 +0000 UTC m=+3054.771109638" watchObservedRunningTime="2026-01-29 16:18:31.106165272 +0000 UTC m=+3054.779019509" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.050679 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.051038 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.102906 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:36 crc kubenswrapper[5008]: E0129 16:18:36.326900 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:37 crc kubenswrapper[5008]: E0129 16:18:37.332199 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:38 crc kubenswrapper[5008]: I0129 16:18:38.900958 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.101635 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.125665 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.154653 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7"} Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.155103 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.172856 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.181681 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.772613144 podStartE2EDuration="26m8.181661737s" podCreationTimestamp="2026-01-29 15:52:31 +0000 UTC" firstStartedPulling="2026-01-29 15:52:32.537977257 +0000 UTC m=+1496.210831494" lastFinishedPulling="2026-01-29 16:18:37.94702585 +0000 UTC m=+3061.619880087" observedRunningTime="2026-01-29 16:18:39.176070531 +0000 UTC m=+3062.848924768" watchObservedRunningTime="2026-01-29 16:18:39.181661737 +0000 UTC m=+3062.854515974" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.372370 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.379374 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.381985 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/extract/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.662678 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-hh7sg_68468eb9-9e76-4f2f-9aba-cc3198e0a241/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.666934 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-4zrsr_6e775178-095e-451d-bded-b83f229c4231/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.842574 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-n4xtj_7a610d2e-cb71-4995-a0e8-f6dc26f7664a/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.946260 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-s4fq5_94a4547d-0c92-41e4-8ca7-64e21df1708e/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.255152 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-9sf7f_b46e3eea-2330-4b3f-b45d-34ae38a0dde9/manager/0.log" Jan 29 16:18:40 crc 
kubenswrapper[5008]: I0129 16:18:40.421968 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-qs9wh_cae67616-1145-4057-b304-08a322e78d9d/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.658281 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-ncxxj_6196a4fd-8576-412f-9140-cf61b98444a4/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.969723 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-q7khh_e57e9a97-d32e-4464-b12c-ba44a4643ada/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.983524 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-zvcs5_4ff89cd9-951e-4907-b60c-a1a1c08007a4/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: E0129 16:18:41.326521 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.426716 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-qhwnb_e76346a9-7ba5-4178-82b7-da9f0c337c08/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.526724 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-bjjwz_d39876a5-4ca3-44e2-a4c5-c6541c2ec812/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.572736 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-44qcp_14020423-5911-4b69-8889-b12267c9bbf9/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.662021 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-klqvj_27a92a88-ee29-47fd-b4cf-5e3232ce7573/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.746704 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-zbddd_4dc123ee-b76c-46a7-9aea-76457232036b/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.875801 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv_9f5d1ef8-a9b5-428a-b441-b7d763dbd102/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.215862 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d9fb954d-qlkhn_9edb96c4-66c6-464b-8dd3-089d6be05a60/operator/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.297880 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lv8km_cdce8b7e-15b6-41ae-89f3-fd69472b9800/registry-server/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.879730 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vtv85_1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1/operator/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.919609 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77db58b9dd-srsvv_44442d63-1bbc-4d1c-9e9d-2a9ad59baf59/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.929347 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-xjf4m_ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.947425 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qjtzq_cb2d6253-7fa7-41a9-9d0b-002ef590c4db/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.110280 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-84h7l_a9dfe223-8569-48bb-8b52-c3fb069208a0/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.113048 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.175407 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.226864 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" containerID="cri-o://46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" gracePeriod=2 Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.252672 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-bbsft_30b3e5fd-7f41-4ed9-a1de-cb282994ad38/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.384695 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-fxz5k_d4fd527b-7108-4f94-b7a9-bb0b358b8c3c/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.443203 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-dwhc5_a2163508-5800-4d97-b8d4-1f3815764822/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.715513 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770186 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770255 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770398 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770891 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities" (OuterVolumeSpecName: "utilities") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.771126 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.776967 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7" (OuterVolumeSpecName: "kube-api-access-29db7") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "kube-api-access-29db7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.832437 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.873558 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.873598 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.238931 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" exitCode=0 Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.238998 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26"} Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239345 5008 scope.go:117] "RemoveContainer" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239061 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.272026 5008 scope.go:117] "RemoveContainer" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.275577 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.283902 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.324324 5008 scope.go:117] "RemoveContainer" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360031 5008 scope.go:117] "RemoveContainer" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.360493 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": container with ID starting with 46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5 not found: ID does not exist" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360534 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} err="failed to get container status \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": rpc error: code = NotFound desc = could not find container \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": container with ID starting with 46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360564 5008 scope.go:117] "RemoveContainer" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.361751 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": container with ID starting with 2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2 not found: ID does not exist" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.361783 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2"} err="failed to get container status \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": rpc error: code = NotFound desc = could not find container \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": container with ID starting with 2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.361817 5008 scope.go:117] "RemoveContainer" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.362137 5008 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": container with ID starting with 150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766 not found: ID does not exist" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.362158 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766"} err="failed to get container status \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": rpc error: code = NotFound desc = could not find container \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": container with ID starting with 150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.769715 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770166 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-utilities" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770198 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-utilities" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770215 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-content" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770222 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-content" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770233 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770240 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770267 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770272 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770437 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770459 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.771828 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789392 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789456 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.791571 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.891865 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.892663 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.911821 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.092516 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.337650 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed48245-be09-46c8-97f9-263179717512" path="/var/lib/kubelet/pods/2ed48245-be09-46c8-97f9-263179717512/volumes" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.503515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257066 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f" exitCode=0 Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257135 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"} Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerStarted","Data":"58ba521f62d36bc6f1b5a187281d524b755d3e2cc08d3d128a1d342bd7761433"} Jan 29 16:18:48 crc kubenswrapper[5008]: I0129 16:18:48.276070 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803" exitCode=0 Jan 29 16:18:48 crc kubenswrapper[5008]: I0129 16:18:48.276187 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"} Jan 29 16:18:49 crc kubenswrapper[5008]: I0129 16:18:49.285105 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerStarted","Data":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"} Jan 29 16:18:49 crc kubenswrapper[5008]: I0129 16:18:49.311123 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m2ch2" podStartSLOduration=2.839254312 podStartE2EDuration="5.311098956s" podCreationTimestamp="2026-01-29 16:18:44 +0000 UTC" firstStartedPulling="2026-01-29 16:18:46.258911836 +0000 UTC m=+3069.931766073" lastFinishedPulling="2026-01-29 16:18:48.73075649 +0000 UTC m=+3072.403610717" observedRunningTime="2026-01-29 16:18:49.302030685 +0000 UTC m=+3072.974884942" watchObservedRunningTime="2026-01-29 16:18:49.311098956 +0000 UTC m=+3072.983953213" Jan 29 16:18:50 crc kubenswrapper[5008]: I0129 16:18:50.297476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" 
event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} Jan 29 16:18:51 crc kubenswrapper[5008]: I0129 16:18:51.306274 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498" exitCode=0 Jan 29 16:18:51 crc kubenswrapper[5008]: I0129 16:18:51.306365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} Jan 29 16:18:52 crc kubenswrapper[5008]: I0129 16:18:52.318376 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} Jan 29 16:18:52 crc kubenswrapper[5008]: E0129 16:18:52.325769 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:52 crc kubenswrapper[5008]: I0129 16:18:52.343279 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9lmvr" podStartSLOduration=2.117183881 podStartE2EDuration="10m57.343256969s" podCreationTimestamp="2026-01-29 16:07:55 +0000 UTC" firstStartedPulling="2026-01-29 16:07:56.537162164 +0000 UTC m=+2420.210016401" lastFinishedPulling="2026-01-29 16:18:51.763235252 +0000 UTC m=+3075.436089489" observedRunningTime="2026-01-29 16:18:52.34000595 +0000 UTC m=+3076.012860207" watchObservedRunningTime="2026-01-29 16:18:52.343256969 +0000 UTC m=+3076.016111206" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.092927 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.093520 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.140175 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.396004 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.615758 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.615885 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.661952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:56 crc kubenswrapper[5008]: I0129 16:18:56.154132 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:56 crc kubenswrapper[5008]: E0129 16:18:56.326126 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:56 crc kubenswrapper[5008]: I0129 16:18:56.406155 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:57 crc kubenswrapper[5008]: I0129 16:18:57.357406 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m2ch2" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server" containerID="cri-o://f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" gracePeriod=2 Jan 29 16:18:57 crc kubenswrapper[5008]: I0129 16:18:57.953532 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:18:58 crc kubenswrapper[5008]: I0129 16:18:58.364617 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server" containerID="cri-o://e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" gracePeriod=2 Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.065735 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175129 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175175 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.177402 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities" (OuterVolumeSpecName: "utilities") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.183051 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh" (OuterVolumeSpecName: "kube-api-access-qmvrh") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "kube-api-access-qmvrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.235412 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277106 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277161 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277176 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.279577 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376250 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" exitCode=0 Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376297 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376354 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376691 5008 scope.go:117] "RemoveContainer" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.377912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"3027721e802c941c68316a40edc4f5165c2ccf1c65e058c580444ac3144242da"}
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379737 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") "
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379804 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") "
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379955 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") "
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380132 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" exitCode=0
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380166 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"}
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380193 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"58ba521f62d36bc6f1b5a187281d524b755d3e2cc08d3d128a1d342bd7761433"}
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380278 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.381616 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities" (OuterVolumeSpecName: "utilities") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.385318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4" (OuterVolumeSpecName: "kube-api-access-gcgl4") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). InnerVolumeSpecName "kube-api-access-gcgl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.402398 5008 scope.go:117] "RemoveContainer" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.411301 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"]
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.420504 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"]
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.430371 5008 scope.go:117] "RemoveContainer" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.432504 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.451931 5008 scope.go:117] "RemoveContainer" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.452419 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": container with ID starting with e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d not found: ID does not exist" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.452471 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} err="failed to get container status \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": rpc error: code = NotFound desc = could not find container \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": container with ID starting with e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.452507 5008 scope.go:117] "RemoveContainer" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.453079 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": container with ID starting with 55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498 not found: ID does not exist" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453106 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} err="failed to get container status \"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": rpc error: code = NotFound desc = could not find container \"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": container with ID starting with 55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498 not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453124 5008 scope.go:117] "RemoveContainer" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.453427 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": container with ID starting with 4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108 not found: ID does not exist" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453458 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"} err="failed to get container status \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": rpc error: code = NotFound desc = could not find container \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": container with ID starting with 4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108 not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453476 5008 scope.go:117] "RemoveContainer" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.474589 5008 scope.go:117] "RemoveContainer" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482426 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") on node \"crc\" DevicePath \"\""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482503 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482514 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.501105 5008 scope.go:117] "RemoveContainer" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.554766 5008 scope.go:117] "RemoveContainer" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.555301 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": container with ID starting with f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca not found: ID does not exist" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.555352 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"} err="failed to get container status \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": rpc error: code = NotFound desc = could not find container \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": container with ID starting with f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.555385 5008 scope.go:117] "RemoveContainer" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.560060 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": container with ID starting with b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803 not found: ID does not exist" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560139 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"} err="failed to get container status \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": rpc error: code = NotFound desc = could not find container \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": container with ID starting with b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803 not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560186 5008 scope.go:117] "RemoveContainer" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"
Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.560922 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": container with ID starting with 6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f not found: ID does not exist" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560989 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"} err="failed to get container status \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": rpc error: code = NotFound desc = could not find container \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": container with ID starting with 6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f not found: ID does not exist"
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.708537 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"]
Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.715824 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"]
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.366059 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.366742 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-content"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.366759 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-content"
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.366771 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-utilities"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369661 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-utilities"
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369774 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369800 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369824 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369831 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369849 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-content"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369863 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-content"
Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369916 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-utilities"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369923 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-utilities"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.370456 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.370472 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.372366 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.400903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.401037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.401105 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.403936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503448 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503546 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503654 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.504148 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.504158 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.522681 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.696109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:01 crc kubenswrapper[5008]: W0129 16:19:01.207917 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5409ba7c_5123_492a_a8d6_230022150d55.slice/crio-604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa WatchSource:0}: Error finding container 604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa: Status 404 returned error can't find the container with id 604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa
Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.210677 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.342760 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" path="/var/lib/kubelet/pods/0cf4cf5b-529f-49a9-900c-a94b840568d8/volumes"
Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.344083 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" path="/var/lib/kubelet/pods/8288f5b4-361c-4f53-bcc9-5ec9a42464cb/volumes"
Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.403663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa"}
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.072156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.412299 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" exitCode=0
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.412353 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e"}
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.561600 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x9bx7_cf3d6df4-e07e-4d72-b2b6-20dcb29700d7/control-plane-machine-set-operator/0.log"
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.776326 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fsx74_6db03bb1-4833-4d3f-82d5-08ec5710251f/kube-rbac-proxy/0.log"
Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.807203 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fsx74_6db03bb1-4833-4d3f-82d5-08ec5710251f/machine-api-operator/0.log"
Jan 29 16:19:03 crc kubenswrapper[5008]: I0129 16:19:03.423765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"}
Jan 29 16:19:04 crc kubenswrapper[5008]: I0129 16:19:04.438417 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" exitCode=0
Jan 29 16:19:04 crc kubenswrapper[5008]: I0129 16:19:04.439011 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"}
Jan 29 16:19:05 crc kubenswrapper[5008]: E0129 16:19:05.328953 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:19:05 crc kubenswrapper[5008]: I0129 16:19:05.460480 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"}
Jan 29 16:19:05 crc kubenswrapper[5008]: I0129 16:19:05.483402 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4knpg" podStartSLOduration=3.056245486 podStartE2EDuration="5.483384676s" podCreationTimestamp="2026-01-29 16:19:00 +0000 UTC" firstStartedPulling="2026-01-29 16:19:02.41490054 +0000 UTC m=+3086.087754777" lastFinishedPulling="2026-01-29 16:19:04.84203973 +0000 UTC m=+3088.514893967" observedRunningTime="2026-01-29 16:19:05.47941334 +0000 UTC m=+3089.152267597" watchObservedRunningTime="2026-01-29 16:19:05.483384676 +0000 UTC m=+3089.156238903"
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.290618 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.291749 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics" containerID="cri-o://9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7" gracePeriod=30
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.472674 5008 generic.go:334] "Generic (PLEG): container finished" podID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerID="9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7" exitCode=2
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.472748 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerDied","Data":"9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"}
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.782694 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.863131 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"2691fca5-fe1e-4796-bf43-7135e9d5a198\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") "
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.869340 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55" (OuterVolumeSpecName: "kube-api-access-hzp55") pod "2691fca5-fe1e-4796-bf43-7135e9d5a198" (UID: "2691fca5-fe1e-4796-bf43-7135e9d5a198"). InnerVolumeSpecName "kube-api-access-hzp55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.965055 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.482689 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerDied","Data":"7986044eeb1cbc11c730082d941ee043dc7374de8a33bf15addb097a4c50eaac"}
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.482745 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.483048 5008 scope.go:117] "RemoveContainer" containerID="9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.508841 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.519868 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.532355 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: E0129 16:19:07.532760 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.532792 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.533015 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.533823 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.535721 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.536225 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.545513 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680275 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680551 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680897 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782923 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782971 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.788857 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.789353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.790471 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.800370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.856695 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.182455 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183114 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core" containerID="cri-o://94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183114 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd" containerID="cri-o://9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183198 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent" containerID="cri-o://c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.185957 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent" containerID="cri-o://cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.320701 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493313 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7" exitCode=0
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493579 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571" exitCode=2
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493381 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7"}
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571"}
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.496834 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"deccddae-c37c-4d93-8591-9de86885520d","Type":"ContainerStarted","Data":"9efb8b918abccd3b69c6ca6aa126d244e44c1f496ecaa07923b76f93590d77c9"}
Jan 29 16:19:09 crc kubenswrapper[5008]: E0129 16:19:09.325166 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.341162 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" path="/var/lib/kubelet/pods/2691fca5-fe1e-4796-bf43-7135e9d5a198/volumes"
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.509444 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"deccddae-c37c-4d93-8591-9de86885520d","Type":"ContainerStarted","Data":"d8ee7814d4a4eda01787615126315da22cae7f8ac0db50c0a81034b82f401057"}
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.509674 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512835 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33" exitCode=0
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512872 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c" exitCode=0
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33"}
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512931 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c"}
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.540888 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.778521366 podStartE2EDuration="2.540844172s" podCreationTimestamp="2026-01-29 16:19:07 +0000 UTC" firstStartedPulling="2026-01-29 16:19:08.331446138 +0000 UTC m=+3092.004300375" lastFinishedPulling="2026-01-29 16:19:09.093768944 +0000 UTC m=+3092.766623181" observedRunningTime="2026-01-29 16:19:09.526650057 +0000 UTC m=+3093.199504314" watchObservedRunningTime="2026-01-29 16:19:09.540844172 +0000 UTC m=+3093.213698439"
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.617896 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723076 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723213 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723314 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723375 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723420 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") "
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723843 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724253 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724423 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724441 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.735627 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n" (OuterVolumeSpecName: "kube-api-access-4zk8n") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "kube-api-access-4zk8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.738439 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts" (OuterVolumeSpecName: "scripts") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.776706 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.826988 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.827021 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.827033 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.858052 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.875306 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data" (OuterVolumeSpecName: "config-data") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.928431 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.928460 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"c0e05b5105ed0e3757d467eff34631c34dcca13e2acddb3cd6556349dd4ddb10"}
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527330 5008 scope.go:117] "RemoveContainer" containerID="9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527359 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.551127 5008 scope.go:117] "RemoveContainer" containerID="94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.573807 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.577676 5008 scope.go:117] "RemoveContainer" containerID="c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.584540 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.599330 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.601802 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.601924 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602014 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602086 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602350 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602428 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd"
Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602561 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602649 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603103 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603208 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603332 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603550 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.606079 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.610906 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.613390 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.613583 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.617224 5008 scope.go:117] "RemoveContainer" containerID="cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.617393 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641513 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641850 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641950 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642467 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.697079 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.698493 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.743868 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744085 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744400 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744463 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744530 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744596 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.745022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.745120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.749011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.750562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.752341 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.754653 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.762208 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.762395 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.764919 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0"
Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.935992 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.335281 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" path="/var/lib/kubelet/pods/d40740f9-e8d8-4f46-b8b0-d913a6c33210/volumes"
Jan 29 16:19:11 crc kubenswrapper[5008]: W0129 16:19:11.404526 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555cfdd3_d86d_45e5_97d5_6f27537a4689.slice/crio-314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb WatchSource:0}: Error finding container 314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb: Status 404 returned error can't find the container with id 314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb
Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.409308 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.538031 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb"}
Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.591245 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.653752 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:12 crc kubenswrapper[5008]: I0129 16:19:12.547393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"6be43b31ac910ae4ec1f4dba9656fa5c8c8e4239b7c47021892fcb2549bb6e77"}
Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.560846 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"ef86ed1f511fa47159eefd44a4cb01541b31a4e53f09dfa6ad6a93886f3b3e3f"}
Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.561007 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4knpg" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" containerID="cri-o://df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" gracePeriod=2
Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.948586 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.990707 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.990753 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.009848 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") "
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.010063 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") "
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.010089 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") "
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.011290 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities" (OuterVolumeSpecName: "utilities") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.017318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k" (OuterVolumeSpecName: "kube-api-access-xbn7k") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "kube-api-access-xbn7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.074509 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112829 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112911 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112928 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571190 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" exitCode=0
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571267 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg"
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571276 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"}
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa"}
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571845 5008 scope.go:117] "RemoveContainer" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.576796 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"24d093ac26006df6aeca5f3301dd74d900a848c05db60e157234b31aa6e5e9b9"}
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.599309 5008 scope.go:117] "RemoveContainer" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.610718 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.621115 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4knpg"]
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.623962 5008 scope.go:117] "RemoveContainer" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e"
Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.663653 5008 scope.go:117] "RemoveContainer" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"
Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664065 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": container with ID starting with df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9 not found: ID does not exist" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664127 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"} err="failed to get container status \"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": rpc error: code = NotFound desc = could not find container \"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": container with ID starting with df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9 not found: ID does not exist" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664158 5008 scope.go:117] "RemoveContainer" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664536 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": container with ID starting with f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f not found: ID does not exist" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664565 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"} err="failed to get container status \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": rpc error: code = NotFound desc = could not find container \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": container with ID starting with f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f not found: ID does not exist" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664585 5008 scope.go:117] "RemoveContainer" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664899 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": container with ID starting with 2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e not found: ID does not exist" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664934 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e"} err="failed to get container status \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": rpc error: code = NotFound desc = could not find container \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": container with ID starting with 2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e not found: ID does not exist" Jan 29 16:19:15 crc kubenswrapper[5008]: I0129 16:19:15.335622 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5409ba7c-5123-492a-a8d6-230022150d55" 
path="/var/lib/kubelet/pods/5409ba7c-5123-492a-a8d6-230022150d55/volumes" Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.599107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"898801d18f7d47890091d3b9387543becdf9583287ef369259c6bb440c0ba97e"} Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.599484 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.627089 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.498151768 podStartE2EDuration="6.627071779s" podCreationTimestamp="2026-01-29 16:19:10 +0000 UTC" firstStartedPulling="2026-01-29 16:19:11.406906434 +0000 UTC m=+3095.079760671" lastFinishedPulling="2026-01-29 16:19:15.535826445 +0000 UTC m=+3099.208680682" observedRunningTime="2026-01-29 16:19:16.618645784 +0000 UTC m=+3100.291500031" watchObservedRunningTime="2026-01-29 16:19:16.627071779 +0000 UTC m=+3100.299926026" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.214558 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fbjsd_346fd378-8582-44af-8332-dad183bddf6e/cert-manager-controller/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.392769 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dvjtx_1217edcf-8ec1-4354-8fbe-a9325b564932/cert-manager-cainjector/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.493252 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wvlhn_6111be19-5e01-42e4-b4cf-3728e3ee4a6f/cert-manager-webhook/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.866212 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:19:18 crc kubenswrapper[5008]: I0129 16:19:18.622086 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} Jan 29 16:19:19 crc kubenswrapper[5008]: I0129 16:19:19.631035 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" exitCode=0 Jan 29 16:19:19 crc kubenswrapper[5008]: I0129 16:19:19.631075 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} Jan 29 16:19:20 crc kubenswrapper[5008]: I0129 16:19:20.643482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} Jan 29 16:19:20 crc kubenswrapper[5008]: I0129 16:19:20.667937 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fl9wc" podStartSLOduration=3.386235129 podStartE2EDuration="11m5.667919181s" 
podCreationTimestamp="2026-01-29 16:08:15 +0000 UTC" firstStartedPulling="2026-01-29 16:08:17.727404837 +0000 UTC m=+2441.400259074" lastFinishedPulling="2026-01-29 16:19:20.009088879 +0000 UTC m=+3103.681943126" observedRunningTime="2026-01-29 16:19:20.659469186 +0000 UTC m=+3104.332323433" watchObservedRunningTime="2026-01-29 16:19:20.667919181 +0000 UTC m=+3104.340773418" Jan 29 16:19:24 crc kubenswrapper[5008]: E0129 16:19:24.326170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.914357 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.914729 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.966471 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:26 crc kubenswrapper[5008]: I0129 16:19:26.738794 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:26 crc kubenswrapper[5008]: I0129 16:19:26.784677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:28 crc kubenswrapper[5008]: I0129 16:19:28.704994 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" containerID="cri-o://0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" gracePeriod=2 Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.215087 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299372 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299549 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.301207 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities" (OuterVolumeSpecName: "utilities") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.326031 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q" (OuterVolumeSpecName: "kube-api-access-l8j5q") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "kube-api-access-l8j5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.351072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402235 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402294 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402391 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721458 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" exitCode=0 Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721594 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"5b1b00bb2ae97cde561959176674c8591e6b4a491353c5009f561f79b72ee787"} Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721612 5008 scope.go:117] "RemoveContainer" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721540 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.744607 5008 scope.go:117] "RemoveContainer" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.775091 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.790035 5008 scope.go:117] "RemoveContainer" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.807016 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.831618 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832137 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832162 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832190 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832200 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832222 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832231 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832241 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832248 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832271 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832279 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832295 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832302 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832506 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 
16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832520 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.842474 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877010 5008 scope.go:117] "RemoveContainer" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.877426 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": container with ID starting with 0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89 not found: ID does not exist" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877474 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} err="failed to get container status \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": rpc error: code = NotFound desc = could not find container \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": container with ID starting with 0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877502 5008 scope.go:117] "RemoveContainer" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.877766 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": container with ID starting with e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042 not found: ID does not exist" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877806 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} err="failed to get container status \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": rpc error: code = NotFound desc = could not find container \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": container with ID starting with e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877822 5008 scope.go:117] "RemoveContainer" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.878117 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": container with ID starting with 048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4 not found: ID does not exist" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 
16:19:29.878220 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} err="failed to get container status \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": rpc error: code = NotFound desc = could not find container \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": container with ID starting with 048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.878966 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916696 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916858 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916904 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.018940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.019066 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.019107 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.021010 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc 
kubenswrapper[5008]: I0129 16:19:30.021331 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.046337 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.178448 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.205886 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-dvn47_75f20405-b349-4e5f-ba1a-b6bf348766ce/nmstate-console-plugin/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.457383 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8hxxx_beee9730-825d-4a7e-9ef1-d735b1bddd07/nmstate-handler/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.566098 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mtz4q_5379965a-18ce-41a4-8753-7a70ed4a5efc/kube-rbac-proxy/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.626934 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mtz4q_5379965a-18ce-41a4-8753-7a70ed4a5efc/nmstate-metrics/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.693617 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:30 crc kubenswrapper[5008]: W0129 16:19:30.713300 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791ec4b8_9faf_4411_86e0_1cdbba387a54.slice/crio-ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930 WatchSource:0}: Error finding container ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930: Status 404 returned error can't find the container with id ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930 Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.731210 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerStarted","Data":"ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930"} Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.836907 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dkpn2_5fab4312-8998-4667-af25-ba459fcb4a68/nmstate-operator/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.863656 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qz5xs_6a7e5f12-26c5-4197-81ed-559569651fab/nmstate-webhook/0.log" Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.333483 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" path="/var/lib/kubelet/pods/66b503d3-cf12-4a89-90ca-27d7f941ed63/volumes" Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.739846 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" exitCode=0 Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.740082 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f"} Jan 29 16:19:32 crc kubenswrapper[5008]: I0129 16:19:32.750491 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" exitCode=0 Jan 29 16:19:32 crc kubenswrapper[5008]: I0129 16:19:32.750703 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579"} Jan 29 16:19:33 crc kubenswrapper[5008]: I0129 16:19:33.770226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerStarted","Data":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} Jan 29 16:19:33 crc kubenswrapper[5008]: I0129 16:19:33.796329 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-thkns" podStartSLOduration=3.351457325 podStartE2EDuration="4.796309897s" podCreationTimestamp="2026-01-29 16:19:29 +0000 UTC" firstStartedPulling="2026-01-29 16:19:31.74229197 +0000 UTC m=+3115.415146207" lastFinishedPulling="2026-01-29 16:19:33.187144532 +0000 UTC m=+3116.859998779" observedRunningTime="2026-01-29 16:19:33.791589682 +0000 UTC m=+3117.464443919" watchObservedRunningTime="2026-01-29 16:19:33.796309897 +0000 UTC m=+3117.469164134" Jan 29 16:19:38 crc kubenswrapper[5008]: E0129 16:19:38.325388 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.179591 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.180012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.238160 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.872615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.929747 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 
16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.947565 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:19:42 crc kubenswrapper[5008]: I0129 16:19:42.862380 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-thkns" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="registry-server" containerID="cri-o://77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" gracePeriod=2 Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.351163 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382602 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382742 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382766 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.386640 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities" (OuterVolumeSpecName: "utilities") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.392813 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47" (OuterVolumeSpecName: "kube-api-access-lbb47") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "kube-api-access-lbb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.404095 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484052 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484079 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484088 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871714 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" exitCode=0 Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871817 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930"} Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871841 5008 scope.go:117] "RemoveContainer" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871852 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.899009 5008 scope.go:117] "RemoveContainer" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.910339 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.917849 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.921864 5008 scope.go:117] "RemoveContainer" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.963459 5008 scope.go:117] "RemoveContainer" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964059 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": container with ID starting with 77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd not found: ID does not exist" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964105 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} err="failed to get container status \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": rpc error: code = NotFound desc = could not find container \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": container with ID starting with 77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964131 5008 scope.go:117] "RemoveContainer" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964450 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": container with ID starting with 8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579 not found: ID does not exist" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964481 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579"} err="failed to get container status \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": rpc error: code = NotFound desc = could not find container \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": container with ID starting with 8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579 not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964523 5008 scope.go:117] "RemoveContainer" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964853 5008 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": container with ID starting with b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f not found: ID does not exist" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964879 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f"} err="failed to get container status \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": rpc error: code = NotFound desc = could not find container \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": container with ID starting with b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.990358 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.990409 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:19:45 crc kubenswrapper[5008]: I0129 16:19:45.336465 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" path="/var/lib/kubelet/pods/791ec4b8-9faf-4411-86e0-1cdbba387a54/volumes" Jan 29 16:19:52 crc kubenswrapper[5008]: I0129 16:19:52.956544 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" exitCode=0 Jan 29 16:19:52 crc kubenswrapper[5008]: I0129 16:19:52.956734 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232"} Jan 29 16:19:53 crc kubenswrapper[5008]: I0129 16:19:53.966588 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerStarted","Data":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} Jan 29 16:19:53 crc kubenswrapper[5008]: I0129 16:19:53.994893 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dqqz" podStartSLOduration=2.586754501 podStartE2EDuration="5m38.994863878s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="2026-01-29 16:14:16.93862721 +0000 UTC m=+2800.611481457" lastFinishedPulling="2026-01-29 16:19:53.346736597 +0000 UTC m=+3137.019590834" observedRunningTime="2026-01-29 16:19:53.985509271 +0000 UTC m=+3137.658363508" watchObservedRunningTime="2026-01-29 16:19:53.994863878 +0000 UTC m=+3137.667718135" Jan 29 16:19:55 crc kubenswrapper[5008]: I0129 16:19:55.963401 5008 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:19:55 crc kubenswrapper[5008]: I0129 16:19:55.963813 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.122942 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bzslg_88b3b62b-8ee9-4541-a109-c52f195f55c2/kube-rbac-proxy/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.234743 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bzslg_88b3b62b-8ee9-4541-a109-c52f195f55c2/controller/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.337276 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.536859 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.560386 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.599077 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.666863 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.820651 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.825949 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.872190 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.943051 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.020166 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" probeResult="failure" output=< Jan 29 16:19:57 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 16:19:57 crc kubenswrapper[5008]: > Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.115497 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.136512 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.136512 
5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.148676 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/controller/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.334202 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/frr-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.347119 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/kube-rbac-proxy-frr/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.430186 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/kube-rbac-proxy/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.621091 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/reloader/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.683740 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4l5h6_fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07/frr-k8s-webhook-server/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.939832 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8644cb7465-xww64_65797f8d-98da-4cbc-a7df-cd6d00fda635/manager/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.052259 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b97546cb-r5lk9_42235713-405f-4dc1-9e60-3b1615ec49a2/webhook-server/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.214399 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dmtw7_8927915f-8333-415c-82e1-47d948a6e8ad/kube-rbac-proxy/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.738735 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dmtw7_8927915f-8333-415c-82e1-47d948a6e8ad/speaker/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.794941 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/frr/0.log" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.027662 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.094822 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.266426 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.075544 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" containerID="cri-o://99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" gracePeriod=2 Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 
16:20:07.519861 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545776 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545841 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.552018 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities" (OuterVolumeSpecName: "utilities") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.553976 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv" (OuterVolumeSpecName: "kube-api-access-bl7kv") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "kube-api-access-bl7kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649247 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649945 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649964 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649973 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085458 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" exitCode=0 Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085510 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"4bc8d674639c663e12f180fa6c89b4e70c92f8b3fda66ccac4d3e879acdf15cc"} Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085637 5008 scope.go:117] "RemoveContainer" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.107726 5008 scope.go:117] "RemoveContainer" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.124587 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.136390 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.137641 5008 scope.go:117] "RemoveContainer" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.189720 5008 scope.go:117] "RemoveContainer" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.190600 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": container with ID starting with 99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845 not found: ID does not exist" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.190665 5008 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} err="failed to get container status \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": rpc error: code = NotFound desc = could not find container \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": container with ID starting with 99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845 not found: ID does not exist" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.190700 5008 scope.go:117] "RemoveContainer" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.191069 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": container with ID starting with da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232 not found: ID does not exist" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191103 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232"} err="failed to get container status \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": rpc error: code = NotFound desc = could not find container \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": container with ID starting with da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232 not found: ID does not exist" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191120 5008 scope.go:117] "RemoveContainer" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.191503 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": container with ID starting with 5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6 not found: ID does not exist" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191529 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6"} err="failed to get container status \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": rpc error: code = NotFound desc = could not find container \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": container with ID starting with 5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6 not found: ID does not exist" Jan 29 16:20:09 crc kubenswrapper[5008]: I0129 16:20:09.335038 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" path="/var/lib/kubelet/pods/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55/volumes" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.679649 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: 
I0129 16:20:10.803873 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.839507 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.840181 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.047940 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.060772 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.090555 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/extract/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.225084 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.374126 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.378075 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.396503 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.515336 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.542217 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/extract/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.544811 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.687823 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.839582 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.846106 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.858111 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.001796 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.049057 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.252173 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.416815 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/registry-server/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.425458 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.506834 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.511490 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.665852 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.671347 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.872743 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pz9kz_077a9343-695d-4180-9255-41f1eaeb58a3/marketplace-operator/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.957752 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.191571 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.191615 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.248235 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.262790 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/registry-server/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.384843 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.387293 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.529275 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/registry-server/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.596684 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.711151 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.734362 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.735709 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.886984 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.937421 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990842 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990909 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990958 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.991806 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.991886 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" gracePeriod=600 Jan 29 16:20:14 crc kubenswrapper[5008]: I0129 16:20:14.680842 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/registry-server/0.log" Jan 29 16:20:14 crc kubenswrapper[5008]: E0129 16:20:14.685176 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.145976 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" exitCode=0 Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146274 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"} Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146315 5008 scope.go:117] "RemoveContainer" containerID="b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.147001 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:20:15 crc kubenswrapper[5008]: E0129 16:20:15.147340 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:28 crc kubenswrapper[5008]: I0129 16:20:28.324016 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 
16:20:28 crc kubenswrapper[5008]: E0129 16:20:28.324770 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:37 crc kubenswrapper[5008]: E0129 16:20:37.822595 5008 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:52936->38.102.83.50:37791: write tcp 38.102.83.50:52936->38.102.83.50:37791: write: broken pipe Jan 29 16:20:43 crc kubenswrapper[5008]: I0129 16:20:43.324412 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:20:43 crc kubenswrapper[5008]: E0129 16:20:43.325343 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:44 crc kubenswrapper[5008]: I0129 16:20:44.114015 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" podUID="64c08f63-12a2-4dfb-b96d-0a12e9725021" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 29 16:20:55 crc kubenswrapper[5008]: I0129 16:20:55.331885 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:20:55 crc kubenswrapper[5008]: E0129 16:20:55.333616 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:21:07 crc kubenswrapper[5008]: I0129 16:21:07.331986 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:21:07 crc kubenswrapper[5008]: E0129 16:21:07.332881 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:21:20 crc kubenswrapper[5008]: I0129 16:21:20.324291 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:21:20 crc kubenswrapper[5008]: E0129 16:21:20.325377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:21:35 crc kubenswrapper[5008]: I0129 16:21:35.324315 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:21:35 crc kubenswrapper[5008]: E0129 16:21:35.325623 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:21:50 crc kubenswrapper[5008]: I0129 16:21:50.324531 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:21:50 crc kubenswrapper[5008]: E0129 16:21:50.327132 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.077208 5008 generic.go:334] "Generic (PLEG): container finished" podID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" exitCode=0 Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.077305 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerDied","Data":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"} Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.078013 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.980439 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/gather/0.log" Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.614263 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"] Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.615138 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="copy" containerID="cri-o://ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" gracePeriod=2 Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.622952 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"] Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.092796 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/copy/0.log" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.093645 5008 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.174935 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/copy/0.log" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175284 5008 generic.go:334] "Generic (PLEG): container finished" podID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" exitCode=143 Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175334 5008 scope.go:117] "RemoveContainer" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175398 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.196844 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.211154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.211221 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.217386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc" (OuterVolumeSpecName: "kube-api-access-p5tbc") pod "d320dd2e-14dc-4c54-86bf-25b5abd30dae" (UID: "d320dd2e-14dc-4c54-86bf-25b5abd30dae"). InnerVolumeSpecName "kube-api-access-p5tbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.268883 5008 scope.go:117] "RemoveContainer" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" Jan 29 16:22:03 crc kubenswrapper[5008]: E0129 16:22:03.269449 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": container with ID starting with ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce not found: ID does not exist" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.269493 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"} err="failed to get container status \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": rpc error: code = NotFound desc = could not find container \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": container with ID starting with ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce not found: ID does not exist" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.269527 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" Jan 29 16:22:03 crc kubenswrapper[5008]: E0129 16:22:03.270730 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": container with ID starting with 3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe not found: ID does not exist" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.270854 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"} err="failed to get container status \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": rpc error: code = NotFound desc = could not find container \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": container with ID starting with 3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe not found: ID does not exist" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.313978 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.336075 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d320dd2e-14dc-4c54-86bf-25b5abd30dae" (UID: "d320dd2e-14dc-4c54-86bf-25b5abd30dae"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.416619 5008 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[5008]: I0129 16:22:04.323632 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:04 crc kubenswrapper[5008]: E0129 16:22:04.324143 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:05 crc kubenswrapper[5008]: I0129 16:22:05.343281 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" path="/var/lib/kubelet/pods/d320dd2e-14dc-4c54-86bf-25b5abd30dae/volumes" Jan 29 16:22:18 crc kubenswrapper[5008]: I0129 16:22:18.323292 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:18 crc kubenswrapper[5008]: E0129 16:22:18.323959 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:33 crc kubenswrapper[5008]: I0129 16:22:33.323737 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:33 crc kubenswrapper[5008]: E0129 16:22:33.324552 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:48 crc kubenswrapper[5008]: I0129 16:22:48.323866 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:48 crc kubenswrapper[5008]: E0129 16:22:48.324689 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:01 crc kubenswrapper[5008]: I0129 16:23:01.323862 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:01 crc kubenswrapper[5008]: E0129 16:23:01.324615 5008 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:15 crc kubenswrapper[5008]: I0129 16:23:15.323885 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:15 crc kubenswrapper[5008]: E0129 16:23:15.324767 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:26 crc kubenswrapper[5008]: I0129 16:23:26.324265 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:26 crc kubenswrapper[5008]: E0129 16:23:26.325256 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:40 crc kubenswrapper[5008]: I0129 16:23:40.323827 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:40 crc kubenswrapper[5008]: E0129 16:23:40.324633 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:54 crc kubenswrapper[5008]: I0129 16:23:54.324366 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:54 crc kubenswrapper[5008]: E0129 16:23:54.325381 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:05 crc kubenswrapper[5008]: I0129 16:24:05.324359 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:05 crc kubenswrapper[5008]: E0129 16:24:05.326081 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:19 crc kubenswrapper[5008]: I0129 16:24:19.324563 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:19 crc kubenswrapper[5008]: E0129 16:24:19.325812 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:32 crc kubenswrapper[5008]: I0129 16:24:32.323651 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:32 crc kubenswrapper[5008]: E0129 16:24:32.324456 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:47 crc kubenswrapper[5008]: I0129 16:24:47.332170 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:47 crc kubenswrapper[5008]: E0129 16:24:47.333063 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:25:02 crc kubenswrapper[5008]: I0129 16:25:02.323930 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:25:02 crc kubenswrapper[5008]: E0129 16:25:02.324746 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:25:16 crc kubenswrapper[5008]: I0129 16:25:16.324067 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:25:16 crc kubenswrapper[5008]: I0129 16:25:16.535638 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"63bb20d0587cf459109c67ec854d569e2684b97c4d5778d674f7a6db512d3546"} Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.842265 5008 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.845598 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="extract-utilities" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.845833 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="extract-utilities" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.845999 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="extract-content" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.846143 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="extract-content" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.846290 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="extract-content" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.846436 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="extract-content" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.846562 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="copy" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.846704 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="copy" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.846940 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.847121 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.847317 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="extract-utilities" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.847497 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="extract-utilities" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.847677 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="gather" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.847876 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="gather" Jan 29 16:25:18 crc kubenswrapper[5008]: E0129 16:25:18.848152 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.848324 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.848953 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.849162 5008 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="registry-server" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.849313 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="gather" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.849461 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="copy" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.852345 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:18 crc kubenswrapper[5008]: I0129 16:25:18.868016 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.014887 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.015241 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5kl\" (UniqueName: \"kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.015313 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.117445 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.117539 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn5kl\" (UniqueName: \"kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.117616 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.118029 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content\") pod \"redhat-operators-s2hht\" (UID: 
\"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.118340 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.147160 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn5kl\" (UniqueName: \"kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl\") pod \"redhat-operators-s2hht\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.181177 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:19 crc kubenswrapper[5008]: I0129 16:25:19.629580 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:20 crc kubenswrapper[5008]: I0129 16:25:20.573668 5008 generic.go:334] "Generic (PLEG): container finished" podID="2809182f-c20c-4e72-adf5-530c2d40840c" containerID="f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4" exitCode=0 Jan 29 16:25:20 crc kubenswrapper[5008]: I0129 16:25:20.573765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerDied","Data":"f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4"} Jan 29 16:25:20 crc kubenswrapper[5008]: I0129 16:25:20.574199 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerStarted","Data":"687c944f8481c4a547038ab2abd77a9297aefe21d838e620abf54f813a58963d"} Jan 29 16:25:20 crc kubenswrapper[5008]: I0129 16:25:20.575892 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:25:21 crc kubenswrapper[5008]: I0129 16:25:21.583320 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerStarted","Data":"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94"} Jan 29 16:25:22 crc kubenswrapper[5008]: I0129 16:25:22.595722 5008 generic.go:334] "Generic (PLEG): container finished" podID="2809182f-c20c-4e72-adf5-530c2d40840c" containerID="be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94" exitCode=0 Jan 29 16:25:22 crc kubenswrapper[5008]: I0129 16:25:22.595836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerDied","Data":"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94"} Jan 29 16:25:23 crc kubenswrapper[5008]: I0129 16:25:23.607630 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerStarted","Data":"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2"} Jan 29 16:25:23 crc kubenswrapper[5008]: I0129 
16:25:23.637774 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2hht" podStartSLOduration=3.051307745 podStartE2EDuration="5.637690552s" podCreationTimestamp="2026-01-29 16:25:18 +0000 UTC" firstStartedPulling="2026-01-29 16:25:20.57556625 +0000 UTC m=+3464.248420497" lastFinishedPulling="2026-01-29 16:25:23.161949067 +0000 UTC m=+3466.834803304" observedRunningTime="2026-01-29 16:25:23.629688498 +0000 UTC m=+3467.302542745" watchObservedRunningTime="2026-01-29 16:25:23.637690552 +0000 UTC m=+3467.310544829" Jan 29 16:25:29 crc kubenswrapper[5008]: I0129 16:25:29.182045 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:29 crc kubenswrapper[5008]: I0129 16:25:29.182666 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:30 crc kubenswrapper[5008]: I0129 16:25:30.241195 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2hht" podUID="2809182f-c20c-4e72-adf5-530c2d40840c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:25:30 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 16:25:30 crc kubenswrapper[5008]: > Jan 29 16:25:39 crc kubenswrapper[5008]: I0129 16:25:39.261386 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:39 crc kubenswrapper[5008]: I0129 16:25:39.308251 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:39 crc kubenswrapper[5008]: I0129 16:25:39.497072 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:40 crc kubenswrapper[5008]: I0129 16:25:40.780522 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2hht" podUID="2809182f-c20c-4e72-adf5-530c2d40840c" containerName="registry-server" containerID="cri-o://06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2" gracePeriod=2 Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.250298 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.409853 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn5kl\" (UniqueName: \"kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl\") pod \"2809182f-c20c-4e72-adf5-530c2d40840c\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.410053 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content\") pod \"2809182f-c20c-4e72-adf5-530c2d40840c\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.410217 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities\") pod \"2809182f-c20c-4e72-adf5-530c2d40840c\" (UID: \"2809182f-c20c-4e72-adf5-530c2d40840c\") " Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.411587 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities" (OuterVolumeSpecName: "utilities") pod "2809182f-c20c-4e72-adf5-530c2d40840c" (UID: "2809182f-c20c-4e72-adf5-530c2d40840c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.417507 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl" (OuterVolumeSpecName: "kube-api-access-zn5kl") pod "2809182f-c20c-4e72-adf5-530c2d40840c" (UID: "2809182f-c20c-4e72-adf5-530c2d40840c"). InnerVolumeSpecName "kube-api-access-zn5kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.513861 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.513923 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn5kl\" (UniqueName: \"kubernetes.io/projected/2809182f-c20c-4e72-adf5-530c2d40840c-kube-api-access-zn5kl\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.517679 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2809182f-c20c-4e72-adf5-530c2d40840c" (UID: "2809182f-c20c-4e72-adf5-530c2d40840c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.616194 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2809182f-c20c-4e72-adf5-530c2d40840c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.790084 5008 generic.go:334] "Generic (PLEG): container finished" podID="2809182f-c20c-4e72-adf5-530c2d40840c" containerID="06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2" exitCode=0 Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.790141 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerDied","Data":"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2"} Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.790179 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2hht" event={"ID":"2809182f-c20c-4e72-adf5-530c2d40840c","Type":"ContainerDied","Data":"687c944f8481c4a547038ab2abd77a9297aefe21d838e620abf54f813a58963d"} Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.790190 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2hht" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.790205 5008 scope.go:117] "RemoveContainer" containerID="06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.831636 5008 scope.go:117] "RemoveContainer" containerID="be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.851158 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.855860 5008 scope.go:117] "RemoveContainer" containerID="f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.859226 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2hht"] Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.924312 5008 scope.go:117] "RemoveContainer" containerID="06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2" Jan 29 16:25:41 crc kubenswrapper[5008]: E0129 16:25:41.924923 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2\": container with ID starting with 06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2 not found: ID does not exist" containerID="06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2" Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.924957 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2"} err="failed to get container status \"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2\": rpc error: code = NotFound desc = could not find container \"06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2\": container with ID starting with 06f9bca47933a886980c35fc3661579550deeabb9a2705226a8bfad879afcdc2 not found: ID does not exist" Jan 29 16:25:41 crc 
Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.924988 5008 scope.go:117] "RemoveContainer" containerID="be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94"
Jan 29 16:25:41 crc kubenswrapper[5008]: E0129 16:25:41.925517 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94\": container with ID starting with be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94 not found: ID does not exist" containerID="be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94"
Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.925581 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94"} err="failed to get container status \"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94\": rpc error: code = NotFound desc = could not find container \"be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94\": container with ID starting with be6fc366a0ea3b3b5d56815a37ec23a1a1d2bad9702872c20292f61467626a94 not found: ID does not exist"
Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.925619 5008 scope.go:117] "RemoveContainer" containerID="f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4"
Jan 29 16:25:41 crc kubenswrapper[5008]: E0129 16:25:41.926093 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4\": container with ID starting with f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4 not found: ID does not exist" containerID="f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4"
Jan 29 16:25:41 crc kubenswrapper[5008]: I0129 16:25:41.926142 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4"} err="failed to get container status \"f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4\": rpc error: code = NotFound desc = could not find container \"f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4\": container with ID starting with f4a01d458f754455811715313bc20c5064ec7a1be09053c7988158ffee0c21e4 not found: ID does not exist"
Jan 29 16:25:43 crc kubenswrapper[5008]: I0129 16:25:43.339762 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2809182f-c20c-4e72-adf5-530c2d40840c" path="/var/lib/kubelet/pods/2809182f-c20c-4e72-adf5-530c2d40840c/volumes"